
Artificial Intelligence is an Engineering Problem, Not Magic!

3 Jan 2019 · CPOL · 16 min read
Part one of a series to demystify and democratize the magic (not) of AI

Introduction

I don't know about you, but I've never been comfortable with the use of the term Artificial Intelligence (AI) by mainstream media, and the way the ubiquitous sales and marketing folk have recently begun to fling it around only adds to my general angst over it all. When we talk about Artificial Intelligence, to many, this conjures up thoughts of killer robots, a dystopian future and being enslaved to mega corporations (gulp, is that bit already here?!). Apart from all that, there is a general misconception that you need a PhD to be able to use these technologies, let alone understand them. To all of the above, I simply say - balderdash good sir! balderdash!

Image 1

I have recently taken a deep dive into the field of AI by involving myself (yet again) in some further university study. Coming at it from an engineering point of view with no serious mathematics background, it's very encouraging to see how accessible this field can be for folks like myself, dealing with applied technology solutions on a daily basis. This is the first of a series of articles I intend to write on the subject - a brief introduction. The aim is to build up knowledge of different AI areas and give just enough background to enable you to understand how things work, and how to implement them on a practical level. If you have a reasonable grasp of the fundamentals, there is no reason why you cannot quickly get to a position where you will:

  • know how to approach different engineering problems with AI solutions
  • identify which category of AI will be most suitable for a given problem (there are some awesome cheat-sheets for this!)
  • know what libraries to use, and what you need to chain together to build out a solid professional solution.

Before we get stuck in, let's draw a line in the sand regarding AI .... the type of AI that we have nowadays, which does, we must admit, some wonderful (yet limited) things, is referred to as 'Narrow AI'. This means it is capable of doing something in one particular domain extremely well, perhaps better and faster than humans, but that's where it starts, and where it stops.

Image 2

The AI we have today is not the all-knowing new overlord intent on world domination many would have us believe, and as long as you work in a reasonably knowledge-based industry, your job is pretty much safe for the moment (more on that later).

When we want to refer to the oh-so-scary omniscient Overlord, we are talking about a different kind of AI - that type is 'General AI', and according to some of the very best minds on our little planet, we are quite a long way from that yet.

So What Then, Actually Is Artificial Intelligence?

“Artificial Intelligence (AI) is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit characteristics we associate with intelligence in human behavior – understanding language, learning, reasoning, solving problems, and so on.” (Barr & Feigenbaum, 1981)

AI contains many sub-fields, some broad, some miles deep and highly specialized. To put it in context, you could say that Machine Learning is to Artificial Intelligence, what calculus is to Mathematics.

Some of the different areas in AI are as follows:

  • Machine Learning
  • Neural Networks
  • Evolutionary Computation
  • Vision
  • Robotics / IoT
  • Expert Systems
  • Speech Processing
  • Natural Language Processing
  • Planning

Like most scientific fields, there are both theoretical and applied areas. Theory seeks to examine and identify ideas about knowledge representation, learning, rule systems, search, and so on, explaining various sorts of real intelligence. On the other hand, the applied engineering focus is to solve real world problems using AI techniques in the areas of Natural Language Processing, Rule & Decisions systems, Search, etc.

I think that, like myself, the majority of readers of this site are more focused on the practical application side of AI than the theoretical. Now here's the thing ... for the majority of things that you will want to do as an engineer with AI today, most of the theory part has already been worked out by the boffins - and that really is your key to success in the area. You *don't* need a PhD to implement AI solutions, but like any engineering problem, you do need to understand enough to know what tools you should use to solve what type of problem, and then how to use those tools.

As developers, when we do our normal bread and butter work, we go through a process of analysis, design, build, test and deployment. Putting together an AI system is exactly the same - just using tools we may not be familiar with yet. When we send data to a printer, hook into a web-service, write a mobile app, we use APIs - in AI, we do exactly the same ... think of it as what you are used to, with just a different endpoint as it were!

It All Boils Down to Numbers....

Pretty much everything we do in AI is done using mathematics, but *don't panic!!* ... even if you (almost) flunked Math 101 in school, as a developer, you can still use the pre-built libraries/APIs/etc. provided by the folks who are good at that kind of thing. You don't actually need to understand how a CPU or memory allocation works to program quite complex things - in fact, most of us do it every day without thinking about it. In the same way, the abstraction, which is getting better by the day, removes the complexity and nitty gritty of AI and allows mere mortal engineers like ourselves to wield algorithms as if we were young Harry Potter himself!

Let's take an example....

Tania goes to an online store and over a period of time, purchases ten books. Eight of the books are about historical musical instruments (niche, I know!), the other two are about cooking. Abhishek shops on the same online store. He purchases two books on historical musical instruments, and one book about cooking in olden times. The next time Tania visits the shop, guess what, we recommend the same book Abhishek purchased about cooking in olden times - the reason is simply that the crossover (inner join?!) between these two customers is cooking and musical instruments - therefore we can make a good guess that Tania will also like books about olden/historical cooking techniques. The way we do this is we create a 'model' (think of it as a cut-out template), and when we get new data, we check whether it fits into the model, and if not fully, then how well it fits. If it's within a certain limit of predefined parameters, then we can make a prediction, with an accuracy based on how close the data is to the model/template.
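Here is a minimal sketch of that overlap idea in Python - the customers, categories and book titles are all invented for illustration, and a real recommender would be considerably more sophisticated:

    # Toy "customers also liked" recommender based on category overlap.
    # All data here is invented purely for illustration.
    purchases = {
        "Tania":    {"historical instruments": 8, "cooking": 2},
        "Abhishek": {"historical instruments": 2, "cooking": 1},
    }

    # Books each customer already owns, by (made-up) title.
    owned = {
        "Tania":    {"Lutes Through the Ages", "Modern Baking"},
        "Abhishek": {"Lutes Through the Ages", "Cooking in Olden Times"},
    }

    def shared_interest(a, b):
        """Count how many categories two customers have both bought from."""
        return len(set(purchases[a]) & set(purchases[b]))

    def recommend(for_customer):
        # Find the customer with the biggest category overlap...
        others = [c for c in purchases if c != for_customer]
        best_match = max(others, key=lambda c: shared_interest(for_customer, c))
        # ...and suggest the books they own that our customer does not.
        return owned[best_match] - owned[for_customer]

    print(recommend("Tania"))   # -> {'Cooking in Olden Times'}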

In general, we convert letters to numbers and use these for our calculations. I'll go into more of the reasons for that in-depth in another article.
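To give a small taste of what 'converting letters to numbers' can look like, here is one deliberately naive encoding - a vocabulary index - purely for illustration (real systems use much richer representations):

    # One naive way to turn words into numbers: give every distinct word an index.
    sentence = "thanks for the beer bob it was awesome"
    vocab = {word: i for i, word in enumerate(sorted(set(sentence.split())))}
    encoded = [vocab[word] for word in sentence.split()]

    print(vocab)     # e.g. {'awesome': 0, 'beer': 1, 'bob': 2, ...}
    print(encoded)   # the sentence as a list of integers the maths can work with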

Another example ...

If we were selling property, and had a list of property prices and the floor areas of properties sold previously, we could make a pretty good guess of (1) how much money it would cost you to buy a property with a certain floor area, and (2) conversely, how much floor area you could expect to get for a given sum of money.

Let's take a list of floor areas vs. prices achieved - they are pretty basic:

Image 3

If we graph these, we can see a trend - clearly in this uber simple example, the more floor space in the property, the higher the price.

Image 4

We can add a trend line to see this a bit more clearly....

Image 5

Now to demonstrate how we can make our prediction, let's overlay another line (green) ....

Image 6

If we have a property with a floor area of about 1,250, we can reasonably expect to get perhaps 27,000 for it on the market. On the other hand, if we had 27,000 in our little fist, we could use it to purchase about 1,250 of floor space. Visually, that's easy to see - under the hood, using a machine learning algorithm, we would work it out by using distance measurements between elements on the X/Y axes. This is, of course, a very simplistic example - now imagine how this might work with multiple variables and elements in the picture .... location of the property, number of rooms, number of nearby schools, etc. This is where we get involved in things like matrix multiplication and will look at some crazy black-box things like Neural Networks/Deep-Learning.
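Under the hood, that green line is simply a line of best fit. Here is a minimal sketch using NumPy - the floor-area/price pairs below are invented stand-ins for the table above, so treat the exact numbers as illustrative only:

    import numpy as np

    # Invented floor areas and sale prices, for illustration only.
    floor_area = np.array([500, 750, 1000, 1250, 1500, 1750, 2000])
    price      = np.array([11000, 16500, 21500, 27000, 32500, 38000, 43500])

    # Fit a straight line: price ~ slope * area + intercept (least squares).
    slope, intercept = np.polyfit(floor_area, price, deg=1)

    def predict_price(area):
        return slope * area + intercept

    def predict_area(budget):
        return (budget - intercept) / slope

    print(round(predict_price(1250)))   # roughly 27,000 on this toy data
    print(round(predict_area(27000)))   # roughly 1,250

With more variables (location, rooms, schools, etc.), the same idea becomes a matrix problem rather than a single line, which is where the matrix multiplication mentioned above comes in.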

We are going to look a lot more at numbers as we progress through this series of articles - it's not difficult - if I can do it, then so can you. :)

A Look at Some AI Magic Fields

Let's take some of the areas of AI and see how they work - remember, this is software engineering y'all, not Harry Potter land!

Natural Language Processing

Commonly referred to as 'NLP', this area works on trying to make sense out of human language and the words that make up language. Language can be a messy thing, and the fact that we have quite a few of them only compounds the issue. NLP is used in many areas you are already familiar with, if not in practice then in daily use on the web.

Some examples would be:

  • Sentiment Analysis

    When we look at these two sentences, we can see one is meant as a positive sentiment, whereas the other is sarcastic/negative:

    'Thanks for the beer Bob, it was awesome!'

    versus

    'Thanks for the warm beer Bob, it was.... different?'

  • Chat-Bot Question Parsing

    When you interact with a self-service online bot, there is a lot of work that goes on behind the scenes parsing what you say into an understanding of your request, generating a suitable response, and creating an actionable workflow. This could involve both NLP and AI based planning.

    'Can you advise a pair of shoes to match with a long blue shirt with butterfly print I am wearing for an event next Tuesday please, and let me know what store I can try them on near me?'

  • Online Translation

    Not only does online translation have to take care of the parsing and translation part, but also recognizing context and sentiment, both of which have an impact on valid translation.

The two images below show an example of what happens under the hood for some NLP parsing - bear in mind that in most cases, we convert words to numbers and analyze these representations - more on this later.

Like all of these things, it makes sense and is not difficult once you know how!

Image 7

Image 8

Reference: https://stanfordnlp.github.io/CoreNLP/
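To make the sentiment example above a little more concrete, here is a deliberately naive, lexicon-based scorer. It has nothing to do with CoreNLP, the word lists are invented, and real systems use trained models rather than hand-picked words:

    # A toy sentiment scorer: count positive and negative words.
    POSITIVE = {"awesome", "great", "thanks", "love", "nice"}
    NEGATIVE = {"warm", "terrible", "awful", "stale", "different"}   # invented lexicon

    def sentiment(text):
        cleaned = text.lower()
        for ch in ",.!?":
            cleaned = cleaned.replace(ch, "")
        words = cleaned.split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    print(sentiment("Thanks for the beer Bob, it was awesome!"))             # positive
    print(sentiment("Thanks for the warm beer Bob, it was.... different?"))  # negative

Notice that the sarcastic sentence only scores 'correctly' because we cheated and put 'warm' and 'different' in the negative list - detecting sarcasm properly needs context, which is exactly why trained models and parsers like the one pictured above exist.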

Facial Identification (Vision)

When an AI 'looks' at Bob, it's not seeing Bob as you and I do - a delightful green mascot ..... instead, the following (roughly) happens:

  • As it expects humans to have one of everything down the middle, and two of everything down the side, the AI first analyses the image and picks certain starting points - normally the eyes (img: 2, below) - and at the same time, isolates the facial area, as that's all it is concerned with, and converts it to gray-scale.
  • Once it has a starting point, it then works its way around the image, picking points of contrast that it expects to be *more or less* in an approximate location in relation to the eyes. It works through these points, finding the mouth, chin, cheekbones, nose, etc., until it has a particular 'map' of the main points on the face (img: 4, below).
  • The 'map' represents a set of points in the image space, along with the distances between them. Other data metrics will be added based on image contrast - these can help identify areas of relative contrast across the face, which generally indicate shadows/raised points/curves, etc.
  • The data gathered for the 'map' is what is then used in the future to identify a person a second time ... i.e., take a 'map' again, then search for the nearest match to it in the database.

Image 9

Reference: https://www.eff.org/wp/law-enforcement-use-face-recognition
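Here is a toy sketch of that 'map and match' idea, assuming some detector has already given us landmark coordinates - the points, names and faces below are entirely invented:

    import math

    def to_map(landmarks):
        """Turn raw (x, y) points into a 'map': pairwise distances, so the
        representation doesn't depend on where the face sits in the frame."""
        names = sorted(landmarks)
        return [math.dist(landmarks[a], landmarks[b])
                for i, a in enumerate(names) for b in names[i + 1:]]

    # Pretend these 'maps' were captured earlier, during enrolment.
    known_faces = {
        "Bob":   to_map({"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 65), "chin": (50, 100)}),
        "Alice": to_map({"left_eye": (32, 45), "right_eye": (66, 44), "nose": (49, 70), "chin": (48, 108)}),
    }

    def identify(landmarks):
        candidate = to_map(landmarks)
        # Nearest match: the stored map with the smallest total difference.
        return min(known_faces,
                   key=lambda name: sum(abs(a - b) for a, b in zip(candidate, known_faces[name])))

    print(identify({"left_eye": (31, 41), "right_eye": (69, 39), "nose": (50, 66), "chin": (51, 101)}))   # Bob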

Scanning images and trying to identify them can be difficult; for this reason, a number of techniques are used to speed things up. One of those, for example, is to lower the resolution of the image so there is less 'noise', and only move to a higher resolution if we get a match at the lower resolution. The image below, which is cut from a flow-chart explaining a deep recurrent network model, shows this technique being used - read it from left to right.

Image 10

Reference: https://sites.wp.odu.edu/VisionLab/research/generalized-object-recognition-using-deep-recurrent-models/
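Here is a rough sketch of that coarse-to-fine idea with NumPy: do a cheap comparison at low resolution first, and only spend effort on the full-resolution comparison when the cheap check passes. The images, threshold and pooling factor are all made up for illustration:

    import numpy as np

    def downsample(img, factor):
        # Average-pool the image by 'factor' to get a cheaper, low-res version.
        h, w = img.shape[0] // factor * factor, img.shape[1] // factor * factor
        return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def looks_similar(a, b, threshold):
        return np.mean(np.abs(a - b)) < threshold

    def match(query, candidate, threshold=5.0):
        # Cheap check at low resolution first...
        if not looks_similar(downsample(query, 8), downsample(candidate, 8), threshold):
            return False
        # ...only do the expensive full-resolution comparison if that passed.
        return looks_similar(query, candidate, threshold)

    rng = np.random.default_rng(0)
    face = rng.integers(0, 255, (64, 64)).astype(float)
    print(match(face, face + rng.normal(0, 2, face.shape)))            # True: nearly identical
    print(match(face, rng.integers(0, 255, (64, 64)).astype(float)))   # False: rejected cheaply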

Machine Learning

Another of the categories within AI is that of Machine Learning. We are all used to rule based systems; these effectively comprise one or more simple or complex 'IF THEN | BRANCH' constructs. One of the things about rule based systems is they don't tend to scale very well, and can get very complex when decision making nuances enter the mix. Machine learning is a vast improvement on rules based systems - rather than having to make up a new rule each time you need the system to consider a different option, you give it new DATA, and say - "here's something new you haven't seen before, but it's the same thing as X....". Whereas rule based systems take the approach of 'COMMAND -> ACTION', Machine Learning takes a different route. At its core, Machine Learning works by taking data, comparing it to a 'model' (of something), and if the data matches the model, then it takes an action.

Let's look at an example. Let's say we defined a 'blueberry muffin' as light in color, small, roundish on top, with dark bits (the blueberries) mixed throughout (yum yum, me like cake!!) .... ok, that seems easy - now, keeping that description of a muffin in mind, look at the picture below......

Image 11

So, not as easy as one might think!
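To contrast the two approaches in code, here is a deliberately simple sketch: a brittle hand-written rule versus a tiny nearest-neighbour 'model' learned from labelled examples. The two features (lightness and speckle count) and all of the numbers are invented - a real image classifier works on pixels, not two hand-picked values:

    # Each 'thing' is described by two invented features: lightness (0-1) and speckle count.
    labelled_examples = [
        ((0.80, 12), "muffin"),    ((0.75, 9),  "muffin"),    ((0.85, 14), "muffin"),
        ((0.70, 10), "chihuahua"), ((0.65, 8),  "chihuahua"), ((0.72, 11), "chihuahua"),
    ]

    # Rule based: brittle, and every new edge case means another IF.
    def rule_based(lightness, speckles):
        if lightness > 0.74 and speckles >= 9:
            return "muffin"
        return "chihuahua"

    # 'Machine learning' in miniature: compare new data to examples we have already seen.
    def nearest_neighbour(lightness, speckles):
        def distance(example):
            (l, s), _ = example
            return (l - lightness) ** 2 + ((s - speckles) / 10) ** 2
        return min(labelled_examples, key=distance)[1]

    # To handle a tricky new case, the rule needs editing; the 'model' just needs more data.
    print(rule_based(0.73, 13), nearest_neighbour(0.73, 13))   # the two approaches disagree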

Machine Learning Categories

There are three broad categories in Machine Learning. These are Supervised learning, Unsupervised learning and Reinforcement learning, and each has its place and things it is good at.

Supervised Learning

This is the easiest to comprehend. We give the algorithm a series of examples and say what the examples are.

This is a green ball, this is another green ball, this is a red ball, this is another green ball, this is a blue ball,... etc.

When we present the algorithm with a new ball, it already has numerous examples to refer to, and can thus say 'it's a blue ball!', or 'shucks, I'm stumped!' if it hasn't seen a particular example. If it gets stumped, then you simply tell it what this new ball is (it's black!) and the next time around, it will be able to recognize the black ball as well as the others. So, as it says on the tin, we are supervising the algorithm, training it when we need to.
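Here is that ball example sketched as a one-nearest-neighbour classifier over RGB values - the colour numbers are invented for illustration:

    # Supervised learning in miniature: labelled examples in, predictions out.
    training = [
        ((20, 200, 30), "green"), ((30, 180, 40), "green"),
        ((220, 20, 25), "red"),   ((15, 30, 210), "blue"),
    ]

    def predict(rgb):
        def distance(example):
            colour, _ = example
            return sum((a - b) ** 2 for a, b in zip(colour, rgb))
        return min(training, key=distance)[1]

    print(predict((25, 190, 35)))    # 'green'

    # Stumped by a black ball? Supervision just means adding another labelled example.
    training.append(((10, 10, 10), "black"))
    print(predict((5, 8, 12)))       # 'black'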

Unsupervised Learning

This kind of algorithm is often seen in cluster identification. This is where we give the algorithm a load of data, and we tell it what parts of the data are interesting (or sometimes not - we can also 'play blind'). We might say, for example: here is a big CSV file of housing purchase data for the greater London area for the past five years; cluster the data into regions with groups of properties in certain price brackets.
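Here is a minimal, hand-rolled clustering sketch on invented house prices (in thousands). In real work you would reach for a library such as scikit-learn; this is only to show that the groups emerge from the data rather than from rules we write:

    # Toy k-means on invented house prices (one feature: price in thousands).
    prices = [180, 195, 210, 480, 500, 515, 900, 940, 960]

    def kmeans_1d(values, k=3, iterations=20):
        # Spread the starting centres across the data (a real library does this more carefully).
        centres = values[::max(1, len(values) // k)][:k]
        for _ in range(iterations):
            clusters = {c: [] for c in centres}
            for v in values:
                nearest = min(centres, key=lambda c: abs(c - v))
                clusters[nearest].append(v)
            centres = [sum(vs) / len(vs) for vs in clusters.values() if vs]
        return clusters

    for centre, members in kmeans_1d(prices).items():
        print(round(centre), members)   # three price brackets, never explicitly defined by us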

Reinforcement Learning

In this last area, rather than providing the algorithm with examples (as in supervised learning), we provide it with a method it can use to examine and quantify its own performance in learning, by giving it a reward indicator of some kind. If we wanted to have the algorithm learn how to play a game, we might start the game by giving it 5 points, and if it does something correctly, give it another point (yea, I'm at 6!!), but get it wrong, and we deduct points (how sad).
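Here is a bare-bones sketch of that reward loop: a guessing game with three possible moves, where one of them secretly pays off. The game, the learning rate and the exploration rate are all invented for illustration:

    import random

    # The 'game': three possible moves; move 2 is secretly the best one.
    def play(move):
        return 1 if move == 2 else -1      # +1 point for the right move, -1 otherwise

    values = [0.0, 0.0, 0.0]               # the agent's running estimate of each move's worth
    score = 5                              # start the game with 5 points, as in the text

    for step in range(200):
        # Mostly exploit what looks best so far, but sometimes explore at random.
        move = random.randrange(3) if random.random() < 0.1 else values.index(max(values))
        reward = play(move)
        score += reward
        # Nudge the estimate for that move towards the reward just received.
        values[move] += 0.1 * (reward - values[move])

    print(values)   # the estimate for move 2 ends up clearly the highest
    print(score)    # and the score climbs once the agent stops guessing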

We will go much deeper and look at lots of examples you can play with yourself in further articles.

Mix 'n Match - You Already Do this...

In the same way as we link different technologies together to provide an end user solution in our normal engineering work, we do the same thing in AI. If we consider the Robot I proposed in my 'Lets build a robot' article, it moves around and is aware of its environment using a combination of different AIs:

  • Facial recognition - used for identifying people
  • Visual object detection - used as part of navigation
  • Voice recognition - used to translate voice commands into logic instructions
  • Sensor data analysis - used as part of navigation and planning

On their own, these technologies/AIs are interesting and useful; chained together, however, they become something more powerful - more than the sum of their parts.
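In code terms, 'chaining' often just means feeding one component's output into the next. Here is a skeletal sketch where every function is a stand-in for a real model or service call - all of the names and return values are hypothetical:

    # Each function is a placeholder for a real AI model or service call.
    def recognise_speech(audio):
        return "bring the red ball to Allen"                    # pretend transcription

    def parse_command(text):
        return {"action": "fetch", "object": "red ball", "person": "Allen"}

    def find_person(name, camera_frame):
        return (3.2, 1.5)                                       # pretend face recognition -> location

    def plan_route(target, sensor_data):
        return ["forward 2m", "turn left 90", "forward 1m"]     # pretend planner output

    def handle(audio, camera_frame, sensor_data):
        text = recognise_speech(audio)
        command = parse_command(text)
        location = find_person(command["person"], camera_frame)
        return plan_route(location, sensor_data)

    print(handle(audio=None, camera_frame=None, sensor_data=None))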

Have a look at this image from Stanford to see NLP alongside Image Recognition:

Image 12

Reference: https://nlp.stanford.edu/projects/DeepLearningInNaturalLanguageProcessing.shtml

Easy Access - Getting Started

There are new AI systems and services coming out all the time. From an applied engineering point of view, the most interesting are those that abstract away the complexity, and give the technology consumer (that's you and me) the ability to harness the AI in question in as frictionless a manner as possible. Microsoft has a long history of providing excellent tooling to developers to enable them to use complex technology in an easy way, and this continues in the Azure cloud. To get started with AI, I always recommend that folks take a look at Cognitive Services - it's an easy way to get your hands dirty without having to climb a massive learning curve to get to AI 'Hello World'. Check out the specific links on the images below for more detailed information on each:

Image 13
Image 14
Image 15
Image 16
Image 17
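Calling a hosted service like these usually boils down to a single authenticated HTTP request. The sketch below shows the general shape only - the endpoint path, header and JSON payload are illustrative placeholders rather than the exact current Cognitive Services contract, so check the official documentation before relying on it:

    import requests

    # Placeholder values - substitute your own resource endpoint and key.
    ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/text/analytics/v2.1/sentiment"
    KEY = "<your-subscription-key>"

    payload = {"documents": [{"id": "1", "language": "en",
                              "text": "Thanks for the beer Bob, it was awesome!"}]}

    response = requests.post(ENDPOINT,
                             headers={"Ocp-Apim-Subscription-Key": KEY},
                             json=payload)
    print(response.json())   # a JSON document containing a sentiment score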
 

Attachments/Links

Two things:

  1. I attach an extremely useful and well prepared AI Cheat Sheet from Microsoft - it explains a number of the things we have talked about thus far extremely well and with great examples - please download it!
  2. Here is an awesome, wonderful, amazing, super duper, ***must visit*** visual representation of some AI/Data Science concepts - it's an absolute must-visit on a frequent basis to get the concepts into your mind!

Coming Next...

In the next article, we will examine:

  • when does an algorithm become an AI?
  • where ethics come into the equation
  • how you can start integrating basic AI into your solutions quickly, wow your boss, make new friends and impress your enemies :)

Learning Material

If you are the reading sort, I can *highly* recommend the following short books - they are all extremely well written, clear, and easy to understand (even if you shudder at the thought of math!)

Fast Reads / The Basics

  • Books - NumSense - Data science for layman. Amazon US/UK/IN
  • Books - Machine Learning for Absolute Beginners. Amazon US/UK/IN
  • Books - Data Smart - Learn the basics using only excel! Amazon US/UK/IN

History

  • 31st December, 2018: Version 1

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Chief Technology Officer SocialVoice.AI
Ireland
Allen is CTO of SocialVoice (https://www.socialvoice.ai), where his company analyses video data at scale and gives Global Brands Knowledge, Insights and Actions never seen before! Allen is a chartered engineer, a Fellow of the British Computing Society, a Microsoft MVP and Regional Director, and a C-Sharp Corner Community Adviser and MVP. His core technology interests are BigData, IoT and Machine Learning.

When not chained to his desk, he can be found fixing broken things, playing music very badly or trying to shape things out of wood. He is currently completing a PhD in AI and is also a ball throwing slave for his dogs.
