AI wrap Friday 27th October
Gartner’s latest top 10 strategic technology trends list is dominated by AI. Of particular interest to our audience are #3, AI-Augmented Development, where they predict increased automation of software development, and #5, Augmented-Connected Workforce, the use of intelligent applications and workforce analytics to help scale talent. The same article goes on to cover Gartner’s top 10 strategic predictions. The standouts here are #3, that by 2028 enterprise spending on battling malinformation will surpass $30B, and #8, that by 2028 there will be more smart robots than frontline workers in manufacturing, retail and logistics due to labour shortages. I also found #5 really interesting: the rate of unionisation among knowledge workers will increase by 1,000% by 2028, motivated by the adoption of GenAI.
There are other state-of-AI reports coming out at the moment, along with quarterly results. Here is a nice summary from Ben’s Bites (one of the sites where I find my AI news) on the latest Air Street Capital report.
A day in the life of AI - love this concept - this reporter took a stab at mapping where their normal daily life intersected with AI. I’m going to pick a day next week to do the same and see how my life has been augmented by AI.
ChatGPT 4 didn’t let me just paste in my URLs to give me a summary of each article today, so I had to feed it part of each article to get these summary paragraphs back. Not ideal, and I don’t think it saved any time. Here is a wrap of this week’s news in AI.
Nvidia ordered to stop exporting to China - The U.S. government has directed Nvidia, a leading chip designer, to immediately cease exporting specific high-end AI chips to China. This move comes earlier than the initially announced date set by the Biden administration, which aimed to prevent certain countries, including China, from obtaining advanced AI chips. While the reason for the expedited timeline remains unspecified, Nvidia anticipates no short-term earnings impact. Other semiconductor firms like Advanced Micro Devices and Intel are also implicated in these export restrictions, with particular focus on Nvidia's AI chips A800 and H800, tailored for the Chinese market.
Dropbox embraced AI and reduced offices - In 2020, Dropbox transitioned to a virtual-first work model, a move that its CEO, Drew Houston, believes enhanced the company's ability to integrate AI into their products. At a recent New York conference, Dropbox showcased new AI-driven tools, notably Dropbox Dash, an AI-powered universal search tool. While leveraging established AI models like OpenAI’s GPT and Meta’s Llama, Dropbox confronts unique challenges, especially around data privacy. Houston emphasizes that while many tech giants are integrating AI, Dropbox's approach, informed by its virtual work model, offers a distinctive edge in the market.
LLMs for dummies - Large language models (LLMs) have seen remarkable advancements recently. To simplify this topic, imagine if air travel speeds surged 1,500x in two years. The article "LLMs for Dummies" delves into the inner workings of LLMs, which operate in the digital domain, allowing them to progress exponentially, as Moore's Law suggests. LLMs use numerical vectors to depict words, grouping them by similarity. Training on vast datasets, such as GPT-3's 500 billion words, is essential for their proficiency. Despite their prowess, they can mirror human biases. Understanding the nuances and context in language sets them apart from regular computational tasks.
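The idea that LLMs "use numerical vectors to depict words, grouping them by similarity" can be shown with a toy sketch. The three-dimensional vectors below are invented for illustration; real models learn embeddings with hundreds or thousands of dimensions from their training data.

```python
import math

# Invented toy embeddings: real models learn these from vast text corpora.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Words with related meanings sit close together in the vector space.
print(cosine_similarity(vectors["king"], vectors["queen"]))  # close to 1
print(cosine_similarity(vectors["king"], vectors["apple"]))  # much lower
```

Grouping by similarity is what lets a model treat "king" and "queen" as related even though the strings share no letters in common positions.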
Autonomous passenger air taxis - Guangzhou-based company Ehang has received an airworthiness "type certificate" from China's Civil Aviation Administration for its EH216-S AAV, a fully autonomous drone that can carry two passengers. This makes Ehang the first to obtain such a certificate, enabling it to operate passenger-carrying electric vertical take-off and landing (eVTOL) aircraft in China. The certification might simplify Ehang's endeavors to gain similar commercial approvals in the U.S., Europe, and Southeast Asia. Ehang's CEO, Huazhi Hu, anticipates international expansion next year, once mutual regulation processes are established. The company has seen its shares nearly double this year and aims to grow low-altitude tourism in collaboration with Xiyu Tourism.
Image details on AI-generated / enhanced images - Google has introduced a new feature called "About this image" in its Search for English language users globally. This tool allows users to verify the credibility and context of online images. It provides insights into an image's history, showing when it first appeared on Google Search and if it was previously published on other sites. Users can also see how the image is used on different webpages and what various sources, including news and fact-checking sites, say about it. Additionally, the tool displays available metadata, revealing details like whether the image was AI-generated or enhanced. Google's own AI-generated images will carry this specific markup.
AI image generation for ads - Amazon Ads has unveiled an AI-driven image generation tool in beta, aimed at enhancing the ad experience for users and aiding advertisers in creating more effective ad campaigns. This development comes in response to a survey where 75% of advertisers identified the creation of ad creatives and selection of creative formats as major challenges. The new tool allows brands to produce lifestyle imagery, which can significantly improve ad performance. For instance, placing a product in a relatable context, such as a toaster on a kitchen counter, can increase click-through rates by up to 40% compared to standard product images.
AI to ingest millions of words - A Google researcher, in collaboration with Databricks CTO Matei Zaharia and UC Berkeley's Pieter Abbeel, has developed a method allowing AI models to analyze millions of words, surpassing the current limit of about 75,000 words. Detailed in a recent research paper, this innovation could revolutionize how we engage with advanced AI tools. Presently, constraints in GPU memory limit the amount of data AI models can process, measured in "tokens" within a "context window." For example, Anthropic's chatbot, Claude, can analyze up to 100,000 tokens, roughly equivalent to a book, while OpenAI's GPT-3.5 and GPT-4 have context lengths of 16,000 and 32,000 tokens respectively.
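The token counts above translate to word counts via a common rule of thumb: one token is roughly 0.75 English words. This is an approximation, not an exact conversion, but a quick back-of-envelope shows how the article's numbers line up.

```python
# Rough back-of-envelope for context windows, using the common rule of
# thumb that 1 token is about 0.75 English words (an approximation only).
WORDS_PER_TOKEN = 0.75

context_windows = {
    "Claude":  100_000,  # tokens, per the article
    "GPT-4":    32_000,
    "GPT-3.5":  16_000,
}

for model, tokens in context_windows.items():
    approx_words = int(tokens * WORDS_PER_TOKEN)
    print(f"{model}: ~{approx_words:,} words")
```

At 0.75 words per token, Claude's 100,000-token window comes out to roughly 75,000 words, matching the "about 75,000 words" limit the article cites.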
App prototyping from Google - Google's new multimodal AI model, Gemini, is set to debut on Makersuite, offering enhanced capabilities beyond its predecessor, PaLM 2. Gemini's introduction will provide users with multimodal functions, including image input support, as seen with Bard. However, the most notable revelation from the leaks is Stubbs, a groundbreaking feature enabling users to construct and launch AI-generated app prototypes directly from Makersuite. Despite no official mention by Google, Stubbs seems poised to revolutionize app prototyping, with a user-friendly interface allowing creation, deployment, and publication. Makersuite will also support language translations, and while Gemini's capabilities are extensive, the nature of its image output remains ambiguous. Contrary to some reports, Gemini isn't an addition to Bard but rather an integrated offering available through Makersuite for wider use.
Microsoft AI investment in Australia - Microsoft has announced plans to invest A$5 billion ($3.2 billion) over two years in Australia to bolster its AI and cloud computing capabilities, marking a 250% increase in its computing capacity. This expansion aims to meet Australia's anticipated doubling of cloud computing demand from 2022 to 2026. Amidst looming AI regulations in Australia, Microsoft's move is seen as a strategic effort to foster goodwill, especially after the introduction of the AI language program, ChatGPT, by Microsoft-backed OpenAI in 2022. Additionally, Microsoft commits to training 300,000 Australians for the digital economy and enhancing its cybersecurity collaboration with the Australian Signals Directorate. This massive investment underscores Microsoft's dedication to Australia's growth in the AI-dominated future.
AI risk for Humanity - Unchecked AI development by a few major tech companies threatens humanity's future, warns MIT professor Max Tegmark, who organized an open letter endorsed by tech leaders like Elon Musk, advocating for a six-month pause in large-scale AI projects. Tegmark emphasizes the need for AI safety standards and regulations to transform the ongoing "race to the bottom" into a "race to the top." A policy document by 23 AI specialists suggests that governments must be prepared to control, halt, or oversee the creation of exceptionally powerful AI models. This call for regulation is amidst huge investments in the sector, including Amazon's potential $4bn involvement with Anthropic, Microsoft's $13bn bet on OpenAI, and Alphabet's substantial AI and cloud computing investments. By 2025, global AI investments could reach $200bn, reshaping businesses and potentially boosting global productivity.
AI Bias in video games - The integration of artificial intelligence (AI) in video games offers immersive experiences but brings forth various ethical challenges. AI algorithms, if not properly managed, can inadvertently perpetuate biases and stereotypes in games. There's a growing concern about AI-enhanced games promoting addiction, particularly among vulnerable groups, with the industry lacking substantial frameworks to address these issues. AI-generated content, while cost-effective, might diminish the distinctiveness and creativity of games and introduce copyright challenges. The extensive data collection essential for AI-driven gaming raises issues about player privacy and the need for informed consent. Additionally, in multiplayer games, AI-driven moderation systems should operate transparently to ensure fairness. To navigate these challenges, ongoing dialogue among players, developers, and regulators is crucial. The industry must balance AI's potential with its accompanying ethical responsibilities.
Artist protection for their work - A tool named Nightshade allows artists to subtly alter the pixels in their artwork, making it problematic for AI training sets that might scrape and use the art without permission. Developed as a countermeasure against unauthorized use by AI companies, Nightshade can disrupt the functioning of image-generating AI models, leading to inaccurate outputs. OpenAI, Meta, Google, and Stability AI face legal challenges from artists over the unauthorized use of their copyrighted content. The team behind Nightshade, led by Professor Ben Zhao, also created Glaze, another tool that masks artists' unique styles to safeguard against AI scraping. These tools aim to shift the power dynamic back to artists, ensuring respect for their intellectual property rights.
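Nightshade's actual technique is far more targeted than this, but the general idea, that pixel changes invisible to a human viewer still alter the raw data an AI training pipeline ingests, can be sketched with a toy perturbation. Everything below (the random image, the ±2 noise) is illustrative only.

```python
import numpy as np

# Illustrative sketch only: Nightshade's real method is more sophisticated.
# A tiny perturbation (here at most 2 out of 255 per channel) is invisible
# to a human, yet the array a training pipeline sees is no longer identical.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.int16)

perturbation = rng.integers(-2, 3, size=image.shape, dtype=np.int16)
poisoned = np.clip(image + perturbation, 0, 255)

# Visually indistinguishable: every pixel moved by at most 2 units...
print(int(np.abs(poisoned - image).max()))
# ...but the underlying data has changed.
print(np.array_equal(image, poisoned))
```

The power imbalance the article describes comes from exactly this asymmetry: the change costs the artist nothing visually, while a model trained on enough poisoned samples starts producing degraded outputs.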
UNESCO on Ethics of AI in Education - Ms. Maysoun Chehab from UNESCO discussed the organization's Recommendation on the Ethics of Artificial Intelligence in Education during an appearance on the Al Hurra channel. She emphasized the necessity for a global framework to guide the ethical use of AI in classrooms. Highlighting the importance of maximizing AI benefits and mitigating risks, Chehab stressed the role of schools in adapting to AI's technological advancements. Cooperation among educational stakeholders is essential to ensure students use AI ethically and rationally. Chehab also highlighted the significance of human oversight in AI-based learning, integrating AI with other educational tools, and focusing on students' holistic development.
DALL-E-generated moon truck.