Saturday, 3 March 2018

U of A Lecture – Demystifying Artificial Intelligence – Part 1




Part 1

I have split this blog into two parts, as it is over 2000 words in its entirety.  Here’s a link to Part 2:

The AMII Institute

We went to a lecture the other week (Feb 20, 2018) about developments in artificial intelligence.  The talk was put on by the University of Alberta Faculty of Extension, partnering with the Alberta Machine Intelligence Institute (AMII).  The presentation was given by Geoff Kliza, one of the researchers at AMII.

AMII has been around for 15 years or so, though under a different name for part of that time.  It is an academic unit at the University of Alberta involved in cutting-edge research, especially in reinforcement learning.  In fact, it is one of the top three such institutes, based on publications.  The University of Alberta has a long history of research into artificial intelligence, especially in the gaming area.  For example, it has partnered with Google on reinforcement learning, and many of the researchers who were involved with AlphaGo (the program that beat the best human Go players) had connections to the U of A.

The talk was put on by their Applied Research Group, a team assembled to partner with industry, government and other organizations, investigating areas where AI could improve efficiency and make money.

To some extent, the talk was a pitch to drum up business and let people know about the initiative.  The idea was to explain AI in general, understandable terms for a lay audience.  It was to be followed by a seminar day, where interested parties could try some of the technologies on real-life data and develop a feel for what they can do.  Eventually, this could lead to ongoing partnerships with industry.

I should note that I am a statistician by profession, working in applied research for a university (things like predicting who will graduate, male/female pay gaps, etc.), so I have a pretty good understanding of data analysis.  But I have only used the newer machine learning techniques to a limited, experimental extent, so bear that in mind as I describe what I heard at the talk and add some comments of my own.

How Intelligent is Artificial Intelligence (AI)?

There is, of course, a great deal of debate about just what we mean by intelligence.  It can be debated in philosophical, religious, cognitive-science, neuroscience or practical engineering terms.  But for the purposes of the talk, it was defined operationally as:

  • Agent perceives environment.
  • Agent calculates appropriate response, depending on data inputs and desired outcome (goal).
  • Agent takes the calculated action. 

Another informal definition is “AI is whatever hasn’t been done yet”, because something isn’t considered a big deal (i.e. AI) once we have gotten used to the idea that computers can do it.
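
To make that operational definition a little more concrete, here is a toy perceive/decide/act loop, written in Python.  It is purely my own illustration, not code from the talk, and the thermostat “environment”, the temperatures and the actions are all made up:

    import random

    GOAL_TEMP = 21.0  # the desired outcome (goal), in degrees Celsius

    def perceive(true_temp):
        # The agent perceives its environment: here, a noisy temperature reading.
        return true_temp + random.uniform(-0.5, 0.5)

    def decide(observation):
        # The agent calculates an appropriate response from the data input and the goal.
        return "heat_on" if observation < GOAL_TEMP else "heat_off"

    def act(action, true_temp):
        # The agent takes the calculated action, which changes the environment.
        return true_temp + 0.5 if action == "heat_on" else true_temp - 0.3

    temp = 17.0
    for step in range(10):
        obs = perceive(temp)
        action = decide(obs)
        temp = act(action, temp)
        print(f"step {step}: observed {obs:.1f}, chose {action}, room is now {temp:.1f}")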

Some traditional forms of AI and areas of research and development that were mentioned in the talk included:

  • Reasoning or logic (e.g. equations and other symbol manipulations).
  • Knowledge Representation (e.g. expert systems).
  • Planning (logistics and the like).
  • Natural Language Processing (NLP), e.g. Apple’s Siri.
  • Translation.
  • Artificial vision.
  • Robotics.

AI Up and Down (History)

Artificial intelligence has a long history, with a lot of ups and downs – in some sense, it goes back to Babbage and his Analytical Engine.  And AI has certainly been a fever dream of human hopes and fears in our culture for at least that long.  Movies such as Colossus, 2001 or Terminator are prime examples of that.  And let’s not forget all of the episodes where Kirk and Spock had to kick some AI in the brain, metaphorically speaking.


In modern times it has had starts and stops: periods of enthusiasm, when human-like intelligence seems just around the corner, and periods of disillusion, often known as “AI Winters”.

The Gartner Hype Cycle (pictured) is a good encapsulation of this phenomenon.  One suspects that many of the AI techniques are currently at “Peak of Inflated Expectations”, while others have moved on to the Trough of Disillusionment.  Some are probably crawling out of that, via the Slope of Enlightenment to the Plateau of Productivity.



The latter would include things like the Google and Amazon recommendation engines.  Siri and Alexa might be there too.  Self-driving cars are probably early in the cycle – I think there will be a lot of issues to work out as they go through their Trough of Disillusionment.  That probably goes for many machine learning applications as well.

By the way, “Big Data” seems to have gone through its own Peak of Inflated Expectations, though I am not sure just where it is now.  You don’t hear nearly as much about it as you did in the recent past.  Interestingly, though, big data has an important role to play in the increased interest in, and usefulness of, AI.

Along with the tidal wave of Big Data, AI has been revivified by improvements in hardware (especially fast GPU chips), intensive development of algorithms (especially variations on artificial neural nets), and investor interest.  The shift of algorithmic focus from expert systems to neural nets has been a major factor, as expert systems are difficult to implement and require experts to provide deep knowledge bases in very specific domains.

On the other hand, neural nets learn by themselves, needing only “ground truth” (labelled examples) and algorithm development from people.  Of course, that brings up the problem that their solutions are black-box solutions – you can’t really tell why they give the answers that they do.
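
As a rough illustration of what learning from “ground truth” means, here is a tiny neural net sketch of my own (plain Python and numpy, nothing to do with AMII’s code): we supply labelled examples and a training rule, and the network adjusts its own weights.  Even at this toy scale, the fitted weights don’t readily explain why the net answers the way it does, which is the black-box issue in miniature.

    import numpy as np

    rng = np.random.default_rng(0)

    # Ground truth: inputs and the labels we want reproduced (the XOR function).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # One hidden layer of 4 units, with randomly initialized weights.
    W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Train with plain gradient descent on squared error.
    lr = 1.0
    for step in range(5000):
        # Forward pass: compute the network's current answers.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: chain rule, worked out by hand for this tiny net.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    # Should print values close to 0, 1, 1, 0 (may vary a little with the random seed).
    print(np.round(out, 2))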

Link to Part 2:




----------------------------------------------------------------------------------------------------------------
Now that you have read about some cutting-edge science, you should consider reading some science fiction.  How about a short story, set in the Arctic, with some alien and/or paranormal aspects?  Only 99 cents on Amazon.

The Magnetic Anomaly: A Science Fiction Story

“A geophysical crew went into the Canadian north. There were some regrettable accidents among a few ex-military who had become geophysical contractors after their service in the forces. A young man and young woman went temporarily mad from the stress of seeing that. They imagined things, terrible things. But both are known to have vivid imaginations; we have childhood records to verify that. It was all very sad. That’s the official story.”






“The Zoo Hypothesis”, an Alien Invasion Story


Here’s a story giving a possible scenario for the so-called Zoo Hypothesis, known in Star Trek lore as the Prime Directive.  It’s an explanation sometimes given to account for a mystery in the Search for Intelligent Life, known as The Great Silence, or Fermi’s Paradox.
Basically, Enrico Fermi argued (quite convincingly, to many observers) that there had been ample time for an alien intelligence to colonize the galaxy since its formation, so where are they?  The Zoo Hypothesis says that they are out there, but have cordoned off the Earth from contact until we are sufficiently evolved or culturally advanced to handle the impact of alien contact.

This story takes a humorous, tongue-in-cheek approach to that explanation.  It also features dogs and sly references to Star Trek.  Talk about Man’s Best Friend.

Amazon Canada: https://www.amazon.ca/dp/B076RR1PGD

