How and under what conditions is it possible for an intelligent machine to learn? To address this question, let’s start with a definition of machine learning. The most widely accepted definition comes from Tom M. Mitchell, an American computer scientist and E. Fredkin University Professor at Carnegie Mellon University. Here is his formal definition: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” In the optical character-recognition example discussed below, for instance, T is recognizing printed characters, P is the percentage of characters recognized correctly, and E is the set of character examples the program has already seen. In simple terms, machine learning requires a machine to learn much as humans do, namely from experience, and to continue improving its performance as it gains more experience.

Machine learning is a branch of AI; it utilizes algorithms that improve automatically through experience. It has also been a focus of AI research since the field’s inception. There are numerous computer software programs, known as machine-learning algorithms, that use various computational techniques to predict outcomes of new, unseen experiences. The study of how well these algorithms can perform is a branch of theoretical computer science known as “computational learning theory.”
What this means in simple terms is that an intelligent machine holds in its memory data relating to a finite set of experiences. The machine-learning algorithms (i.e., software) compare this data with a new experience and use a specific algorithm (or combination of algorithms) to guide the machine in predicting an outcome of that new experience. Since the experience data in the machine’s memory is limited, the algorithms are unable to predict outcomes with certainty. Instead they assign a probability to each possible outcome and act in accordance with the highest probability.
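To make this concrete, here is a minimal sketch of the idea, assuming a toy set of stored experiences and a simple distance-based notion of similarity; the data and the `similarity` and `predict` functions are purely illustrative, not any particular production algorithm.

```python
from collections import Counter

# Each stored experience pairs the features of a past situation with its observed outcome.
experiences = [
    ((1.0, 0.2), "rain"),
    ((0.9, 0.3), "rain"),
    ((0.8, 0.4), "rain"),
    ((0.2, 0.9), "sun"),
    ((0.1, 0.8), "sun"),
]

def similarity(a, b):
    """Higher when two feature vectors are closer together (inverse of Euclidean distance)."""
    distance = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + distance)

def predict(new_situation, k=3):
    """Estimate outcome probabilities from the k most similar stored experiences."""
    ranked = sorted(experiences, key=lambda e: similarity(new_situation, e[0]), reverse=True)
    votes = Counter(outcome for _, outcome in ranked[:k])
    total = sum(votes.values())
    probabilities = {outcome: count / total for outcome, count in votes.items()}
    # The machine cannot be certain, so it acts on the most probable outcome.
    return max(probabilities, key=probabilities.get), probabilities

# An unseen situation: the prediction comes with probabilities, not certainty
# (here roughly a two-to-one split between "sun" and "rain").
print(predict((0.5, 0.6)))
```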
Optical character recognition is an example of machine learning. In this case the computer recognizes printed characters based on previous examples. As anyone who has ever used an optical character-recognition program knows, however, the programs are far from 100 percent accurate. In my experience the best case is a little more than 95 percent accurate when the text is clear and uses a common font.

There are eleven major machine-learning algorithms and numerous variations of these algorithms. To study and understand each would be a formidable task. Fortunately, though, machine-learning algorithms fall into three major classifications. By understanding these classifications, we can gain significant insight into the science of machine learning. Therefore let us review the three major classifications; a short illustrative sketch of each follows the list:
  1. Supervised learning: This class of algorithms infers a function (a way of mapping or relating an input to an output) from training data, which consists of training examples. Each example consists of an input object and a desired output value. Ideally the inferred function (generalized from the training data) allows the algorithm to analyze new data (unseen instances/inputs) and map it to (i.e., predict) a high-probability output.
  2. Unsupervised learning: This class of algorithms seeks to find hidden structures (patterns in data) in a stream of input (unlabeled data). Unlike in supervised learning, the examples presented to the learner are unlabeled, which makes it impossible to assign an error or reward to a potential solution.
  3. Reinforcement learning: Reinforcement learning was inspired by behaviorist psychology. It focuses on which actions an agent (an intelligent machine) should take to maximize a reward (for example, a numerical value associated with utility). In effect the agent receives rewards for good responses and punishments for bad ones. Reinforcement-learning algorithms require the agent to take discrete time steps and calculate the reward as a function of having taken each step. The agent then takes another time step and again calculates the reward, which provides feedback to guide its next action. The agent’s goal is to collect as much reward as possible.
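As a concrete illustration of supervised learning (the first classification), the following is a minimal sketch, assuming a toy set of labeled examples; the simple perceptron-style update rule used here is only one of many possible supervised algorithms.

```python
# Toy training data: each example pairs an input object (two feature values)
# with a desired output (1 if the two values sum to more than 1, else 0).
training_data = [
    ((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.4, 0.3), 0), ((0.7, 0.6), 1),
]

weights, bias = [0.0, 0.0], 0.0

def predict(x):
    """Apply the function inferred so far to an input."""
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if activation > 0 else 0

# Infer the function from the training examples (a simple perceptron update).
for _ in range(20):
    for x, desired in training_data:
        error = desired - predict(x)          # 0 when correct, +1 or -1 when wrong
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
        bias += 0.1 * error

# The inferred function now maps unseen inputs to high-probability outputs.
print(predict((0.8, 0.9)))   # expected: 1
print(predict((0.1, 0.2)))   # expected: 0
```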
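For unsupervised learning (the second classification), here is an equally small sketch: the points carry no labels, and a basic k-means-style loop uncovers the hidden grouping; the data values and the choice of two clusters are arbitrary illustrations.

```python
data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9, 1.1, 5.1]   # unlabeled input values
centers = [data[0], data[1]]                       # naive initial guesses

for _ in range(10):
    # Assign each point to its nearest center...
    clusters = [[], []]
    for x in data:
        nearest = min(range(2), key=lambda i: abs(x - centers[i]))
        clusters[nearest].append(x)
    # ...then move each center to the mean of its assigned points.
    centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]

# The two centers settle near 1 and 5 -- structure found without any labels.
print(sorted(centers))
```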
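And for reinforcement learning (the third classification), the sketch below has an agent move along a short corridor in discrete time steps, receiving a reward only at the far end; the corridor, reward values, and learning constants are invented for illustration, and the tabular Q-learning update is just one common way to realize the idea.

```python
import random

N_STATES = 5                  # corridor cells 0..4; reaching cell 4 earns the reward
ACTIONS = [-1, +1]            # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}   # learned action values

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Mostly exploit what has been learned so far, occasionally explore
        # (ties are broken at random so early behavior is an unbiased walk).
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: (q[(state, a)], random.random()))
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Feedback: nudge the value of this (state, action) pair toward the reward
        # plus the best value obtainable from the next state.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = next_state

# The learned policy: in each cell the agent prefers to move right (+1),
# because that is the direction in which reward accumulates.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```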
In essence, machine learning incorporates four essential elements (a minimal end-to-end sketch follows this list):
  1. Representation: The intelligent machine must be able to assimilate data (input) and transform it in a way that makes it useful for a specific algorithm.
  2. Generalization: The intelligent machine must be able to accurately map unseen data to similar data in the learning data set.
  3. Algorithm selection: After generalization the intelligent machine must choose and/or combine algorithms to make a computation (such as a decision or an evaluation).
  4. Feedback: After a computation, the intelligent machine must use feedback (such as a reward or punishment) to improve its ability to perform steps 1 through 3 above.
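Putting the four elements together, the following is a minimal end-to-end sketch, assuming an invented toy task (deciding whether a piece of text is “long”), an invented representation (its word count), and a simple error-driven update; these choices only show where each element fits, not how any particular system implements them.

```python
# Toy experiences: raw text paired with the desired decision (1 = "long", 0 = "short").
raw_experiences = [
    ("short text", 0),
    ("a much longer piece of text", 1),
    ("tiny", 0),
    ("another fairly long example sentence", 1),
]

weight, bias = 0.0, 0.0

def represent(raw):
    # 1. Representation: transform raw input into something the algorithm can use.
    return len(raw.split())                 # here, simply the word count

def generalize(features):
    # 2. Generalization: map data (seen or unseen) through the function learned so far.
    return weight * features + bias

def decide(score):
    # 3. Algorithm selection / computation: a single fixed decision rule stands in
    #    for choosing among algorithms in this tiny example.
    return 1 if score > 0 else 0

for raw, desired in raw_experiences * 10:   # several passes over the experiences
    decision = decide(generalize(represent(raw)))
    # 4. Feedback: use the error (reward or punishment) to improve future decisions.
    error = desired - decision
    weight += 0.1 * error * represent(raw)
    bias += 0.1 * error

# An unseen input is represented, generalized, and decided upon -- likely output: 1.
print(decide(generalize(represent("a brand new fairly long input sentence"))))
```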
Machine learning is similar to human learning in many respects. The most difficult issue in machine learning is generalization, or what is often referred to as abstraction. This is simply the ability to determine which features and structures of an object (i.e., data) are relevant to solving the problem. Humans are excellent when it comes to abstracting the essence of an object. For example, regardless of the breed or type of dog, whether we see a small, large, multicolored, long-haired, short-haired, large-nosed, or short-nosed animal, we immediately recognize that the animal is a dog. Most four-year-old children immediately recognize dogs. However, most intelligent agents have a difficult time with generalization and require sophisticated computer programs to enable them to generalize.

Machine learning has come a long way since the 1972 introduction of Pong, the first game developed by Atari Inc. Today’s computer games are incredibly realistic, and their graphics approach the quality of a movie. Few of us can win a chess game against our computer or smartphone unless we set the difficulty level to low. In general, machine learning appears to be accelerating, even faster than the field of AI as a whole. We may, however, see a bootstrap effect, in which machine learning yields highly intelligent agents that in turn accelerate the development of artificial general intelligence. Yet there is more to the human mind than intelligence: one of the most important characteristics of our humanity is our ability to feel human emotions.

This raises an important question: When will computers be capable of feeling human emotions? A new science, termed “affective computing,” is emerging to address how to develop and program computers to be capable of simulating and eventually feeling human emotions. We will discuss affective computing in a future post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte