ITP Techblog

Brought to you by IT Professionals NZ

Not another ChatGPT blog - this one has ideas you can use

Victoria MacLennan. 10 February 2023, 1:37 pm

I think it’s fair to say the race to own the interactive AI space is heating up, free vs paid access to AI is poised to become the new frontier of Digital Equity, and the role of these emerging technologies in education is vexing parents and educators alike. With this as a backdrop, ITP members got together earlier this week to kick off a discussion on what we know now that ChatGPT is out in the market (and Bard has been partially launched). It was a great discussion, and clearly only the start of our discourse on AI becoming mainstream.

Here is a summary of our reckons. 

[Image: AI chatbots]

What should we be concerned about? 

As you can imagine, there was a very long list of concerns, some stemming from the current limitations, early version and limited dataset of ChatGPT. Below I have stuck to the larger issues, each a topic in its own right.

Authority and confidence - the output of ChatGPT, for instance, is written in authoritative language, but we have all seen results that are simply untrue. The risk of answers-according-to-the-AI becoming accepted truth is incredibly high. User education will be required: you need to validate what the tool is saying, because it can very confidently be wrong.

Copyright - could utilising these tools lead to unintentional copyright breaches? The ease with which copyright could be breached is striking. Are the creators of AI algorithms liable? A big can of worms.

Legislation - we universally agreed AI tools should not be used in the development of legislation, given the risk of nuanced language creeping in and the risk of bias. We noted that the development of legislation is already a black art to most of us, but we didn’t want to see these tools taking a role.

Deep Fake / Fraud - achieving this is only getting easier as AI technology develops, and it is almost impossible for countries to keep their legal frameworks up to date to protect citizens. This is one worth a whole other session.

Bias - we don’t know what bias is built into the algorithms. Eg: in the interest of ensuring results are curated to exclude hate speech, the programmers inject their own definitions of, and biases about, hate speech into the code. How do we ensure transparency of algorithms?

Referencing / sources / provenance - with the amalgamation of very large datasets, it becomes challenging to trace the source or provenance of a result, or to have the tools return references, and results will become augmented over time. We entered a discussion on the use of blockchain-like solutions to overcome this risk - again, a whole other topic.

Misinformation / Disinformation / Malinformation - it almost doesn’t need expanding: the possibility of an AI engine being trained through input from users (or bots) to become a misogynistic, racist, homophobic, biased machine could be all too easy to realise.

US bias / voice - using ChatGPT as an example, the results are very US-centric and the language is American English … another can of worms.

Patents - several governments have already come out and said AI technology cannot be used in the development of patents - this is a great explainer and provides an Aotearoa NZ context. 

What should we be embracing?

The group pointed out we already utilise AI technologies in a range of SaaS tools, eg: Grammarly (which was universally agreed to be a great example), and offered a list of other reasons we should be embracing this:

  • With a user interface like ChatGPT’s, the door opens for more people to access advanced technologies and have their world view heard
  • The self-learning nature of the tools. Eg: ask ChatGPT to write something about itself and then critique its own output; it recognises its own faults and challenges
  • A great English language editing tool
  • Drawing AI apps like DALL-E provide impressive visualisations, opening this space up to everyday users
  • As the technology evolves it will learn to provide results in the user’s own voice
  • Great for prompting ideas
  • Great for getting started on a project
  • Great for learning critical thinking 

What are the Possible Use Cases?

There were many; here are some I hadn’t thought of:

  • Explain concepts in age-appropriate terms eg: explain reproduction to a 5 year old
  • Instructions on how to do things eg: how to install Python, distilling the many, many sites out there with subtly different instructions (see the sketch after this list)
  • Editing documents and emails
  • Getting started on writing things, on code development, etc
  • For interns to learn about different coding languages and constructs, refactoring and reviewing / validating the AI’s output
  • Supporting research processes with access to a wider library of results
  • Recipe suggestions based on the ingredients you have in your fridge
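
On that “how to do things” use case: for anyone who wants to experiment programmatically rather than through the web interface, here is a minimal sketch using OpenAI’s Python library. The model name, tutor persona and prompt are illustrative assumptions on my part, not recommendations, and you would need your own API key.

```python
# Minimal sketch, assuming the openai package is installed (pip install openai)
# and an API key is set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Shape the prompt to get step-by-step, audience-appropriate instructions.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a patient technical tutor."},
        {"role": "user", "content": "How do I install Python on Windows? Give numbered steps."},
    ],
)

# As noted above: validate the output - the tool can be confidently wrong.
print(response.choices[0].message.content)
```

The same pattern covers the age-appropriate explanations use case - only the prompt changes.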

Other thoughts that arose 

I was in awe of the group of people who joined our call: academics, lawyers, architects, developers, business analysts, consultants - amazing minds. Here are some of the other insights noted from the call.

One theme we delved into came off the back of a facetious comment that “when we are all driving Teslas there will be no accidents on the roads” - where is AI technology taking us, social engineering-wise?

We also spoke extensively about AI tools and the education system; I think that topic warrants its own blog, so I will bring you those insights next week.

If you are an ITP member, I intend to run deeper-dive discussions on the larger themes here. It’s a great way for all of us to learn and develop our views / understanding of the technology as it emerges.

What’s next though, do you think? Vic

