HaileyburyXAI AI chatbot symbol

UNDERSTANDING AI MICROCOURSE

AI BASICS

Learn about how AI works by understanding some applications of AI in image processing, medicine, and transport. 

Image
Artificial Intelligence (or AI) is a branch of computer science that tries to copy or simulate human intelligence in a computer.
The idea is that computers can be made to perform tasks that usually need human intelligence. This means that humans can be freed up to do other things, and that AI should be able to do those tasks faster, more cheaply or more accurately.

As you will see in the GENERAL AI microcourse, AI systems are powered by algorithms.

You will learn in that course that there are two basic types of algorithms: those that use simple rules, such as "IF (something happens) THEN (do this)"; and those that are much more complicated and that try to learn about how to solve a problem without being told how to solve the problem. This is called machine learning or, sometimes, deep learning.
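The difference can be sketched in a few lines of Python. The thermostat, the example temperatures and the threshold-picking rule below are all invented for illustration; real machine learning uses far more sophisticated methods, but the contrast is the same: one program is told its rule, the other works its rule out from examples.

```python
# A hand-written rule: the programmer specifies the behaviour directly.
def rule_based_thermostat(temperature):
    # IF the room is cold THEN turn the heater on.
    if temperature < 18:
        return "heater on"
    return "heater off"

# A "learned" rule: instead of writing the threshold ourselves, we let the
# program find it from example (temperature, correct action) pairs.
def learn_threshold(examples):
    # Pick the midpoint between the warmest "heater on" example and the
    # coolest "heater off" example -- a very simple form of learning.
    on_temps = [t for t, action in examples if action == "heater on"]
    off_temps = [t for t, action in examples if action == "heater off"]
    return (max(on_temps) + min(off_temps)) / 2

examples = [(10, "heater on"), (15, "heater on"),
            (20, "heater off"), (25, "heater off")]
threshold = learn_threshold(examples)  # 17.5, found from data, not written by hand
```

The second program never contains the number 18; it arrives at its own boundary from the examples it is shown, which is the essence of machine learning.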

You probably know how science fiction movies and books show AI: as some kind of robot in movies like Terminator or 2001: A Space Odyssey. And of course, we have our own digital person, Hailey, who you’ll talk to as part of the AI microcourses.

But AI is not just robots or digital people. There are many other examples of how AI is used. We’ll talk about three of them in this microcourse: images, medicine and transport.

Before we start, you should keep in mind one more thing that we will also talk more about in the GENERAL AI microcourse.

That’s the difference between trying to build computers and AI that can think, understand, and act in a way that is exactly the same as a human (what is called ‘strong AI’ or ‘general AI’); and building AI that can do specific tasks (what is called ‘weak AI’ or ‘narrow AI’).

This is a really important idea, and you’ll learn more about this when we talk about ideas like the Turing Test in other microcourses.

In this microcourse, we’re going to look at weak AI - AI used for specific things.

There are lots and lots of examples (and many that you can find out about for yourself), but we’ve chosen three: using AI to process and create images; using AI in medicine to help diagnose and treat disease; and using AI to help build intelligent transport.

AI BASICS: AI and images

IS WHAT YOU SEE REALLY WHAT YOU SEE?
Thanks to algorithms, computers can be taught to process images and to analyse them much more thoroughly than humans can.

Some applications of image processing involve recognising faces to confirm someone’s identity, or analysing images to find specific objects or patterns.

AI can also be used to improve images so that people can understand them more easily, to retrieve an image from a large set of images, to measure the size of objects in an image, or to identify what kinds of things an image contains.

Here’s an example of what AI can do when it is used to process images.

Here you can see classification (what is this?), localisation (where is it?), object detection (what is in this image?) and segmentation (what different things are in this image?) (From Cornell University Computer Vision lectures).
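To see what two of these tasks mean in the simplest possible terms, here is a toy Python sketch. The 5×5 "image" and the brightness threshold are invented for illustration; real computer-vision systems use learned models rather than a fixed threshold, but segmentation (which pixels belong to the object?) and localisation (where is its bounding box?) rest on the same principle.

```python
# A toy 5x5 grayscale "image" (0 = dark, 255 = bright). The bright block
# stands in for an object the computer should find.
image = [
    [0,   0,   0, 0, 0],
    [0, 200, 210, 0, 0],
    [0, 220, 230, 0, 0],
    [0,   0,   0, 0, 0],
    [0,   0,   0, 0, 0],
]

def segment(image, threshold=100):
    """Segmentation: mark each pixel as object (1) or background (0)."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def localise(mask):
    """Localisation: the bounding box (top, left, bottom, right) of the object."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))

mask = segment(image)
box = localise(mask)  # (1, 1, 2, 2): the object fills rows 1-2, columns 1-2
```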
DETECTING DISEASE
And here’s an example of how AI can analyse images of children’s eyes to potentially detect the early signs of autism.

The technology uses a high-resolution camera with software that analyses features such as the blood vessels in the eye. Click the image to read the article on Reuters.
A Hong Kong scientist has developed a method to use machine learning and artificial intelligence to scan retinas of children as young as six to detect early autism
DETECTING EMOTIONS
 
Also, AI can be used to detect your emotions.
 
You may have noticed how our digital person, Hailey, can smile back at you when you smile at her.
 
If you want to learn more about this process, watch this video about emotion recognition in a TED Talk by Kostas Karpouzis.
DEEPFAKES

There’s lots to read about how AI is used in processing images.

You can find out all about this by searching for yourself, but we’d like to talk about AI and image processing using an example you may have heard of: deepfakes.

To get started, read this article from the New York Times.

You’ll read in this article how a visual effects artist worked with a Tom Cruise impersonator to create videos that really look like Tom Cruise (you can see them here on TikTok).

You’ll also read about the website MyHeritage, which uses AI to digitally animate old photographs to create videos that show people moving their heads and even smiling.


Here is an example from the MyHeritage genealogy site showing the Rev. Dr Martin Luther King Jr.
The example from MyHeritage shows how this kind of technology can bring pleasure to people who would like to see how people from the past may have looked in real life. In movies, it could be used to make actors younger or to change the shape of mouths to make it seem like actors are speaking another language.

Of course, as you will learn in the AI ETHICS microcourse, this technology can be dangerous as well as entertaining. It might be used to insert people into images or videos who were never originally there, for example, or to create videos of people committing crimes they never committed.

Want to learn more about deepfakes and how they can be used for good or evil? Read this New York Times article. 
SOME QUESTIONS TO CONSIDER
 
Do you think that deepfakes should be banned? Would that be possible?
 
Can you name three positive uses of deepfakes that would make life easier?
 
Can you name three negative uses of deepfakes that might cause you a problem in your daily life?
 
Do you think you have ever believed that a deepfake was real?
 
Read this article.  How would you feel about your emotions being monitored by AI as you learn?
 
You might want to consider deepfakes as a topic for THE AI EXPLORATORY.

AI BASICS: AI and MEDICINE

THE DOCTOR/AI WILL SEE YOU NOW
How can machine learning benefit medicine? Watch this introduction to machine learning from Harvard University.
AI IS SOMETIMES BETTER THAN A HUMAN
 
In 2018, researchers at Seoul National University Hospital developed an AI algorithm called DLAD (Deep Learning based Automatic Detection).
 
The algorithm analysed X-rays of patients’ chests to find potential cancers.
 
The algorithm’s performance was compared with the doctors’ own analyses of the X-rays, and the researchers found that it outperformed 17 out of 18 doctors.
As we have seen, AI has become extremely efficient and accurate at categorising images.
 
In our GENERAL AI microcourse, you will be able to have a machine identify whether an image you upload is a dog or a cat in a matter of minutes.
 
Of course, humans can do this from a very early age, so it may not seem so impressive.
 
However, when it comes to medicine, doctors take years to build up expertise in looking at an image and diagnosing a disease. An untrained person would not know what to look for in specialised cases, such as detecting skin cancer or identifying features in a CT scan that could indicate lung cancer.
 
To be useful, AI does not need to be perfect, just better than a human at diagnosing diseases.
A chest X-ray in which the DLAD algorithm detected potentially cancerous cells.
And AI has also been shown to be as good as, if not better than, doctors at detecting breast cancer.
 
Researchers from Google Health and Imperial College London designed and trained a computer model on X-ray images from nearly 29,000 women. The algorithm outperformed six radiologists in reading the X-rays.
 
Read this article from the BBC that summarises the research into using AI for diagnosis. 
MENTAL HEALTH AND AI
But it’s not just reading X-rays where AI can be useful.

A company called Woebot uses AI in the form of a chatbot that helps people with mental health problems. People are able to chat to Woebot every day, and it learns about how they are thinking and suggests things to do to improve their mood.

Studies of Woebot suggest that it can be effective in treating depression.
AI HELPS WITH EPILEPSY
 
There are lots of applications of AI in medicine and healthcare.
 
Here’s another one. People who suffer from epileptic seizures can now use a device that detects when a seizure happens and lets others know when the wearer needs help.
 
The Embrace is worn by people with epilepsy, and it uses machine learning - where AI collects data and analyses it for patterns - to recognise the signs of a seizure.
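As a rough illustration of finding patterns in sensor data, here is a toy Python sketch. The readings, the calibration rule and the "sustained shaking" test are all invented; the real Embrace uses trained machine-learning models on rich physiological data, but the idea of learning a baseline from data and flagging unusual, sustained patterns is the same.

```python
# Toy wrist-movement readings (e.g. acceleration magnitude, once per second).
def calibrate(normal_readings):
    # Learn a baseline from everyday movement: the mean plus a safety margin.
    mean = sum(normal_readings) / len(normal_readings)
    spread = max(normal_readings) - min(normal_readings)
    return mean + 2 * spread

def detect_alert(readings, threshold, sustained=3):
    # Raise an alert only if the threshold is exceeded for several
    # consecutive readings, so a single bump doesn't trigger it.
    run = 0
    for r in readings:
        run = run + 1 if r > threshold else 0
        if run >= sustained:
            return True
    return False

threshold = calibrate([1.0, 1.2, 0.9, 1.1, 1.0])
detect_alert([1.0, 5.0, 1.0, 1.1], threshold)       # False: one brief spike
detect_alert([1.0, 5.0, 5.2, 5.1, 4.9], threshold)  # True: sustained shaking
```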
SOME QUESTIONS TO CONSIDER
 
Would you rather have an experienced professional diagnose your medical condition or a computer?
 
Do you worry that computers will replace experts in medicine?
 
Do you think humans and computers can work together to improve healthcare outcomes?
 
If a computer fails to detect a disease, who is responsible?
 
Finally, watch this video from former World Chess Champion Garry Kasparov.
 
 

AI BASICS: AI and INTELLIGENT TRANSPORT

ARE WE THERE YET?
Remember when you were little and always asked “are we there yet?” on every trip?
 
Now AI is being used in all kinds of transport to help make travelling faster, easier and hopefully safer.
 
AI is used in many forms of transport. In aviation, for example, intelligent systems help fly planes, ensure that planes are where they should be in an airport, make refuelling more efficient, and move people around airports. And, as we saw earlier, image processing can help with security risks in airports by identifying anyone who may pose a threat.
 
But one of the biggest applications of AI is in driving.  
 
AI is already used in lots of areas of road transport. For example, AI is used to help analyse the pattern of traffic to provide drivers with information on the fastest route or to control traffic signals and traffic lights to make travelling faster.
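Finding the fastest route is a classic graph problem. The toy road network below is invented, and real navigation systems combine live traffic data with far larger maps, but the core idea can be sketched with Dijkstra's algorithm in Python:

```python
import heapq

# A toy road network: travel times (minutes) between junctions, which a
# navigation system might update in real time from traffic data.
roads = {
    "home":    {"main_st": 5, "back_rd": 2},
    "main_st": {"school": 4},
    "back_rd": {"main_st": 1, "school": 9},
    "school":  {},
}

def fastest_route(roads, start, goal):
    """Dijkstra's algorithm: the shortest-travel-time path from start to goal."""
    queue = [(0, start, [start])]      # (time so far, junction, route taken)
    visited = set()
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            return time, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, t in roads[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (time + t, neighbour, path + [neighbour]))
    return None

fastest_route(roads, "home", "school")
# (7, ['home', 'back_rd', 'main_st', 'school']): the back road is quicker
# even though it passes through an extra junction.
```

When traffic changes, the navigation system simply updates the travel times on each road and runs the search again, which is why your suggested route can change mid-journey.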
 
But the real aim is that AI takes over specific aspects of driving, eventually taking over the process of driving altogether in what are called self-driving cars.
 
Start by watching this segment of the BBC Click programme on self-driving cars. 
AI TECHNOLOGIES
 
Right now, Tesla and other car companies use AI in what are called advanced driver assistance systems that provide steering, braking and acceleration support under limited circumstances.
 
Teslas use data from sensors such as GPS (which provides location information), cameras (which allow the car to see) and radar (which detects where objects are and how fast they are moving). Many other self-driving prototypes also use lidar (which builds a map of the world around the car by firing millions of light pulses every second and measuring how long they take to return).
 
All of the data created by these sensors is analysed in real time by clever AI algorithms that then help control the car.
 
One of these AI systems is called a neural network. It’s a way of analysing lots of data and learning from it, for example about what kinds of objects the car might come across (cyclists, pedestrians, trucks) and remembering them for next time.
 
Other algorithms then try and make intelligent decisions based on all of this data (for example, what a pedestrian at a crossing might do).
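The learning loop of a neural network can be sketched with a single artificial neuron, called a perceptron. The object sizes and labels below are invented for illustration; a real driving network learns from millions of camera images rather than two numbers, but the cycle is the same: make a prediction, compare it with the right answer, and nudge the weights.

```python
# Each example: (width, height) of an object seen by the car;
# label 1 = "truck", label 0 = "cyclist". Invented numbers for illustration.
examples = [((4.0, 3.0), 1), ((5.0, 3.5), 1),
            ((0.5, 1.8), 0), ((0.7, 1.6), 0)]

weights = [0.0, 0.0]
bias = 0.0
for _ in range(20):                      # go over the examples a few times
    for (x1, x2), label in examples:
        prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = label - prediction       # 0 when correct, +/-1 when wrong
        weights[0] += 0.1 * error * x1   # nudge weights toward the answer
        weights[1] += 0.1 * error * x2
        bias += 0.1 * error

def classify(width, height):
    score = weights[0] * width + weights[1] * height + bias
    return "truck" if score > 0 else "cyclist"
```

After training, the neuron has settled on weights that separate the two kinds of objects; deep networks stack millions of such neurons in layers, but each one learns by the same predict-and-correct loop.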
 
WILL CARS BE SELF-DRIVING SOON?
 
But how close are we to having cars that don’t need drivers?
 
According to some people, it may be years away, as the cars currently being tested still make mistakes. There have been many reports of crashes involving the latest AI-based cars.
 

And, as you will learn in the AI ETHICS microcourse, it’s hard to know how to deal with questions about what decisions AI can and should make.

What should a self-driving car do when faced with a life-versus-life situation - for example, when an accident forces a choice between injuring one person or several?
 
Should cars make their own decisions, or should they be programmed with the wishes of their drivers? And if there is an accident, whose fault is it - the car, the owner or the people who made the car? These are all questions that need to be thought through.
 
When cars are completely self-driving, we will have reached what is called level 5 automation. At that point, none of the driving will be done by a human, who can just sit back and relax, sleep or watch YouTube.
 
Right now we are probably at level 3, or maybe level 4 - where drivers still need to be in charge, although they get help from AI systems with braking, navigation, parking or in some cases steering on highways.
 
One thing we do know is that, over the next few years, AI will become much more a part of cars and driving.
 
Your first car may be a self-driving car. So you can watch YouTube.
SOME QUESTIONS TO CONSIDER
 
Would you trust a complete self-driving car?
 
If you could program your car’s AI to make choices about what it does in an accident, would you?
 
What other changes might happen when fully self-driving cars arrive? To cities? To roads? To other kinds of transport?
 
If your car collects data about how you drive, or where you go, do you think you should own that data?
 

 
