Upskilling to see through AI hallucinations

From the plausible to the ridiculous, an AI hallucination may not immediately appear to be a threat, so everyone needs to know how to avoid making what could be a costly error.

Artificial intelligence (AI) can bring clarity to complex topics by breaking down huge datasets and building a narrative around figures and large-scale information. For professionals in roles that consistently deal with significant amounts of data, it can be a real game-changer, allowing for an optimised workday and the reallocation of time to higher-value tasks.

But AI comes with a caveat: it is only ever as strong or as trustworthy as the data it was trained on and the people who built and trained it. AI hallucinations are nonsensical, inaccurate or misleading answers delivered as a confident generated response. The phenomenon occurs when a large language model produces information with no credible basis, or draws on an uncredited, even absurd source, and presents it as truthful.

While it can sometimes be glaringly obvious that a 'fact' provided in an AI response is fictitious, it may not always be clear, and people in jobs that depend on accuracy and transparency could face serious repercussions if a mistake slips under the radar. So, how can professionals upskill to better recognise an AI hallucination?

Consider formal education 

It is fair to say that for many people in the workforce, particularly younger generations such as Gen Z and millennials, much of what we know about technology and modern tools was learned through exposure. There is a lot to be said for learning on the job; however, formal education can also give professionals a leg up, as well as prepare them for new and emerging challenges posed by a changing landscape.

More often than not, mistakes are a byproduct of a lack of training. To ensure that you are in the best position to recognise a situation in which an AI hallucination is a possibility, why not look into an online course, external upskilling or webinar opportunities?

Accredited edtech organisations, such as Coursera, Khan Academy and LinkedIn Learning, often have a range of modules, sometimes free, to appeal to almost every lifestyle. Additionally, if you want something a little less casual, it could be an opportunity to look into engaging with third-level education, night classes or micro-credentials.

Think critically

A rule of thumb when dealing with advanced technologies is, where possible, don't go into anything blind and don't accept anything without question. AI hallucinations can be deceiving, and a professional will need critical-thinking skills to determine the veracity of the information.

Working on your critical-thinking skills will involve a more in-depth understanding of how to source, analyse and incorporate credible material into your overall task. Fact-checking tools from reputable sites can be of assistance, especially until you are more confident in your ability to recognise a legitimate resource.

Additionally, professionals should be aware of their own biases and any potential blind spots they may have, to ensure that their own experiences and opinions are not presented as fact. 

Prompt engineering

While there is a common misconception that AI is almost infallible, with the potential to answer any question you could think of, nothing could be further from the truth. Not only does AI generate answers based on patterns learned from its human-curated training data, it also answers the question as you phrased it, which can be a contextual nightmare if you lack skill in that area.

Upskilling in prompt engineering gives users the best chance of phrasing requests as they intended, and can be achieved by being ultra-specific, brief and accurate. Exclude superfluous details, and if you don't fully understand the reply, or if you think it could be improved, ask follow-up questions until there is no ambiguity.

Don't be vague or biased, and keep workshopping your question until you are confident that it is strong. Additionally, if the answer presents something as fact, ask the model to provide the source from which it pulled the information, so you can confirm its authenticity. The more specific you are, the less room the model has to misinterpret what you have said or to create a hallucination.
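The advice above — be specific, exclude the superfluous, and demand sources — can be sketched as a simple prompt template. This is a minimal illustration, not a real API: the helper function, its field names and the example company are all hypothetical.

```python
def build_prompt(task: str, context: str, constraints: list[str]) -> str:
    """Assemble a specific, source-demanding prompt from its parts.

    Illustrative helper only: the structure (task, context, constraints)
    mirrors the article's advice, not any particular AI tool's API.
    """
    lines = [task, f"Context: {context}"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)


# A vague prompt leaves the model maximum room to interpret (or hallucinate).
vague = "Tell me about company revenue."

# A specific prompt narrows the task, states the context and demands sources.
# "Acme Ltd" is a made-up example company.
specific = build_prompt(
    task="Summarise Acme Ltd's reported revenue for its 2023 fiscal year.",
    context="I am preparing an internal finance briefing.",
    constraints=[
        "Cite the source document for every figure you give.",
        "If you cannot verify a figure, say so instead of guessing.",
        "Keep the answer under 150 words.",
    ],
)

print(specific)
```

The template forces each of the article's habits into the prompt itself: a narrow task, relevant context, a citation requirement and an explicit instruction not to guess.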

So there you go: three excellent ways to ensure that the next time you engage with AI-generated material, you have the skills to see past the smoke and mirrors.
