Book Description: This book covers the field of machine learning: the study of algorithms that improve automatically through experience. I would like to thank Joan Mitchell for creating the index for the book.
When the error (Vtrain(b) - V̂(b)) is positive, i.e., when V̂(b) is too low, each weight is increased in proportion to the value of its corresponding feature. This will raise the value of V̂(b), reducing the error. Notice that if the value of some feature xi is zero, then its weight is not altered regardless of the error, so that the only weights updated are those whose features actually occur on the training example board. These four modules are summarized in Figure 1. The Performance System takes an instance of a new problem (a new game) as input and produces a trace of its solution (a game history) as output.
Therefore, we expect its performance to improve as this evaluation function becomes increasingly accurate.
As shown in the diagram, each training example in this case corresponds to some game state in the trace, along with an estimate Vtrain of the target function value for this example. In our example, the Critic corresponds to the training rule given by Equation 1. The Generalizer takes as input the training examples and produces an output hypothesis that is its estimate of the target function. It generalizes from the specific training examples, hypothesizing a general function that covers these examples and other cases beyond the training examples.
In our example, the Generalizer corresponds to the LMS algorithm, and the output hypothesis is the function V̂ described by the learned weights w0 through w6. The Experiment Generator takes as input the current hypothesis (the currently learned function) and outputs a new problem (i.e., an initial board state) for the Performance System to explore.
Its role is to pick new practice problems that will maximize the learning rate of the overall system.
In our example, the Experiment Generator follows a very simple strategy: it always proposes the same initial game board to begin a new game. More sophisticated strategies could involve creating board positions designed to explore particular regions of the state space. Together, the design choices we made for our checkers program produce specific instantiations for the Performance System, Critic, Generalizer, and Experiment Generator. Many machine learning systems can be usefully characterized in terms of these four generic modules.
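The interaction of the four modules might be sketched as a simple loop. All function names below are illustrative assumptions, not code from the book:

```python
# A schematic of how the four generic modules could be wired together.
# Each argument is a stand-in callable for the corresponding module.

def learning_loop(n_games, performance_system, critic, generalizer,
                  experiment_generator, hypothesis):
    """Run n_games rounds of the generate/play/criticize/generalize cycle."""
    for _ in range(n_games):
        problem = experiment_generator(hypothesis)        # propose a new initial board
        trace = performance_system(problem, hypothesis)   # play one game, record history
        examples = critic(trace)                          # derive (state, Vtrain) pairs
        hypothesis = generalizer(examples, hypothesis)    # update the estimated V̂
    return hypothesis
```

With trivial stub modules this loop runs end to end, which makes the data flow between the four modules explicit even before any real learning algorithm is plugged in.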
The sequence of design choices made for the checkers program is summarized in Figure 1. These design choices have constrained the learning task in a number of ways.
We have restricted the type of knowledge that can be acquired to a single linear evaluation function. Furthermore, we have constrained this evaluation function to depend on only the six specific board features provided. If the true target function V can indeed be represented by a linear combination of these features, then our program has a good chance to learn it. If not, then the best we can hope for is that it will learn a good approximation, since a program can certainly never learn anything that it cannot at least represent.
Let us suppose that a good approximation to the true V function can, in fact, be represented in this form. The question then arises as to whether this learning technique is guaranteed to find one. Chapter 13 provides a theoretical analysis showing that under rather restrictive assumptions, variations on this approach do indeed converge to the desired evaluation function for certain types of search problems.
Fortunately, practical experience indicates that this approach to learning evaluation functions is often successful, even outside the range of situations for which such guarantees can be proven. Would the program we have designed be able to learn well enough to beat the human checkers world champion?
Probably not. In part, this is because the linear function representation for V̂ is too simple. However, given a more sophisticated representation for the target function, this general approach can, in fact, be quite successful.
For example, Tesauro reports a similar design for a program that learns to play the game of backgammon, by learning a very similar evaluation function over states of the game.
His program represents the learned evaluation function using an artificial neural network that considers the complete description of the board state rather than a subset of board features. After training on over one million self-generated training games, his program was able to play very competitively with top-ranked human backgammon players. Of course we could have designed many alternative algorithms for this checkers learning task.
One might, for example, simply store the given training examples, then try to find the "closest" stored situation to match any new situation (nearest neighbor algorithm, Chapter 8). Or we might generate a large number of candidate checkers programs and allow them to play against each other, keeping only the most successful programs and further elaborating or mutating these in a kind of simulated evolution (genetic algorithms, Chapter 9).
Humans seem to follow yet a different approach to learning strategies, in which they analyze, or explain to themselves, the reasons underlying specific successes and failures encountered during play (explanation-based learning). Our design is simply one of many, presented here to ground our discussion of the decisions that must go into designing a learning method for a specific class of tasks.
For example, consider the space of hypotheses that could in principle be output by the above checkers learner. This hypothesis space consists of all evaluation functions that can be represented by some choice of values for the weights w0 through w6.
The learner's task is thus to search through this vast space to locate the hypothesis that is most consistent with the available training examples. The LMS algorithm for fitting weights achieves this goal by iteratively tuning the weights, adding a correction to each weight each time the hypothesized evaluation function predicts a value that differs from the training value. This algorithm works well when the hypothesis representation considered by the learner defines a continuously parameterized space of potential hypotheses.
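A minimal sketch of this iterative weight tuning, assuming a learning rate `eta` and a generic list of feature values (both are illustrative assumptions, not the book's notation):

```python
# Hypothetical sketch of the LMS weight-update rule described above.
# `weights` and `features` are parallel lists; `v_train` is the training
# value Vtrain(b) for the board, and `eta` is an assumed learning rate.

def lms_update(weights, features, v_train, eta=0.1):
    """Adjust each weight in proportion to the prediction error and its feature value."""
    v_hat = sum(w * x for w, x in zip(weights, features))   # current estimate V̂(b)
    error = v_train - v_hat                                 # (Vtrain(b) - V̂(b))
    # A weight whose feature value is zero is left unchanged, as the text notes.
    return [w + eta * error * x for w, x in zip(weights, features)]
```

Applying this update repeatedly over the training examples is the "iterative tuning" the text describes: each call nudges the weights so that V̂(b) moves toward Vtrain(b).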
Many of the chapters in this book present algorithms that search a hypothesis space defined by some underlying representation (e.g., linear functions, logical descriptions, decision trees, artificial neural networks). These different hypothesis representations are appropriate for learning different kinds of target functions.
For each of these hypothesis representations, the corresponding learning algorithm takes advantage of a different underlying structure to organize the search through the hypothesis space. Throughout this book we will return to this perspective of learning as a search problem in order to characterize learning methods by their search strategies and by the underlying structure of the search spaces they explore. We will also find this viewpoint useful in formally analyzing the relationship between the size of the hypothesis space to be searched, the number of training examples available, and the confidence we can have that a hypothesis consistent with the training data will correctly generalize to unseen examples.
The field of machine learning, and much of this book, is concerned with answering questions such as the following:
- What algorithms exist for learning general target functions from specific training examples? In what settings will particular algorithms converge to the desired function, given sufficient training data? Which algorithms perform best for which types of problems and representations?
- How much training data is sufficient? What general bounds can be found to relate the confidence in learned hypotheses to the amount of training experience and the character of the learner's hypothesis space?
- When and how can prior knowledge held by the learner guide the process of generalizing from examples? Can prior knowledge be helpful even when it is only approximately correct?
- What is the best strategy for choosing a useful next training experience, and how does the choice of this strategy alter the complexity of the learning problem?
- What is the best way to reduce the learning task to one or more function approximation problems? Put another way, what specific functions should the system attempt to learn? Can this process itself be automated?
- How can the learner automatically alter its representation to improve its ability to represent and learn the target function?
Where possible, the chapters have been written to be readable in any sequence. However, some interdependence is unavoidable. If this is being used as a class text, I recommend first covering Chapter 1 and Chapter 2. Following these two chapters, the remaining chapters can be read in nearly any sequence. A one-semester course in machine learning might cover the first seven chapters, followed by whichever additional chapters are of greatest interest to the class.
Below is a brief survey of the chapters. Chapter 2 covers concept learning based on symbolic or logical representations. It also discusses the general-to-specific ordering over hypotheses, and the need for inductive bias in learning. A later chapter examines Occam's razor, a principle recommending the shortest hypothesis among those consistent with the data, and another presents a detailed example of neural network learning for face recognition, including data and algorithms available over the World Wide Web.
Other chapters cover the calculation of confidence intervals for estimating hypothesis accuracy, methods for comparing the accuracy of learning methods, and a detailed example applying a naive Bayes classifier to the task of classifying text documents, including data and software available over the World Wide Web.
Both symbolic and neural network algorithms are considered. The checkers learning algorithm described earlier in Chapter 1 is a simple example of reinforcement learning. The end of each chapter contains a summary of the main concepts covered, suggestions for further reading, and exercises. Major points of this chapter include: machine learning algorithms have proven to be of great practical value in a variety of application domains.
They are especially useful in data mining problems where large databases may contain valuable implicit regularities that can be discovered automatically. Machine learning draws on ideas from a diverse set of disciplines, including artificial intelligence, probability and statistics, computational complexity, information theory, psychology and neurobiology, control theory, and philosophy.
Much of this book is organized around different learning methods that search different hypothesis spaces.
There are a number of good sources for reading about the latest research results in machine learning.

Exercises: Give three computer applications for which machine learning approaches seem appropriate and three for which they seem inappropriate. Pick applications that are not already mentioned in this chapter, and include a one-sentence justification for each.
Pick some learning task not mentioned in this chapter. Describe it informally in a paragraph in English. Now describe it by stating as precisely as possible the task, performance measure, and training experience. Finally, propose a target function to be learned and a target representation. Discuss the main tradeoffs you considered in formulating this learning task. Prove that the LMS weight update rule described in this chapter performs a gradient descent to minimize the squared error.
In particular, define the squared error E as in the text. Now calculate the derivative of E with respect to the weight wi, assuming that V̂(b) is the linear function defined in the text. Gradient descent is achieved by updating each weight in proportion to -∂E/∂wi. Therefore, you must show that the LMS training rule alters weights in this proportion for each training example it encounters.
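A sketch of the calculation the exercise asks for, assuming V̂(b) is the linear function w0 + w1x1 + ... + w6x6 from the text and D is the set of training boards:

```latex
E \equiv \sum_{b \in D} \left( V_{train}(b) - \hat{V}(b) \right)^2

\frac{\partial E}{\partial w_i}
  = \sum_{b \in D} 2\left( V_{train}(b) - \hat{V}(b) \right)
    \left( -\frac{\partial \hat{V}(b)}{\partial w_i} \right)
  = -2 \sum_{b \in D} \left( V_{train}(b) - \hat{V}(b) \right) x_i
```

Updating each weight in proportion to $-\partial E / \partial w_i$ therefore gives $\Delta w_i = 2\eta \sum_{b} \left( V_{train}(b) - \hat{V}(b) \right) x_i$, which matches the LMS rule applied example by example, with the constant 2 absorbed into the learning rate $\eta$.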
Consider alternative strategies for the Experiment Generator module of Figure 1. In particular, consider strategies in which the Experiment Generator suggests new board positions by:
- generating random legal board positions
- generating a position by picking a board state from the previous game, then applying one of the moves that was not executed
- a strategy of your own design
Discuss tradeoffs among these strategies.
Which do you feel would work best if the number of training examples was held constant, given the performance measure of winning the most games at the world championships? Implement an algorithm similar to that discussed for the checkers problem, but use the simpler game of tic-tac-toe. Represent the learned function V as a linear combination of board features of your choice. To train your program, play it repeatedly against a second copy of the program that uses a fixed evaluation function you create by hand.
Plot the percent of games won by your system versus the number of training games played.

References:
- Psychological studies of explanation-based learning. In G. DeJong (Ed.). Boston: Kluwer Academic Publishers.
- Anderson, J. The place of cognitive architecture in rational analysis. In K. VanLehn (Ed.). Hillsdale, NJ: Erlbaum.
- Chi, M. Learning from examples via self-explanations. In L. Resnick (Ed.). Hillsdale, NJ: L. Erlbaum Associates.
- Cooper, G. An evaluation of machine-learning methods for predicting pneumonia mortality. Artificial Intelligence in Medicine, to appear.
- Fayyad, U. Automated analysis and exploration of image databases: Results, progress, and challenges. Journal of Intelligent Information Systems, 4.
- Laird, J. SOAR: The anatomy of a general learning mechanism. Machine Learning, 1(1).
- Langley, P. Applications of machine learning and rule induction. Communications of the ACM, 38(11).
- Lee, K. Automatic speech recognition: The development of the Sphinx system. Boston: Kluwer Academic Publishers.
- Pomerleau, D.
- Qin, Y. Using EBG to simulate human learning from examples and learning by doing.
- Rudnicky, A. Survey of current speech technology in artificial intelligence. Communications of the ACM, 37(3).
- Rumelhart, D. The basic ideas in neural networks. Communications of the ACM, 37(3).
- Tesauro, G. Practical issues in temporal difference learning. Machine Learning, 8.
- Tesauro, G. Temporal difference learning and TD-gammon. Communications of the ACM, 38(3).
- Waibel, A., Hanazawa, T. Phoneme recognition using time-delay neural networks.

This chapter considers concept learning: acquiring the definition of a general category given a sample of positive and negative training examples of the category.
Concept learning can be formulated as a problem of searching through a predefined space of potential hypotheses for the hypothesis that best fits the training examples.
In many cases this search can be efficiently organized by taking advantage of a naturally occurring structure over the hypothesis space: a general-to-specific ordering of hypotheses. This chapter presents several learning algorithms and considers situations under which they converge to the correct hypothesis. We also examine the nature of inductive learning and the justification by which any program may successfully generalize beyond the observed training data.
People, for example, continually learn general concepts or categories such as "bird," "car," "situations in which I should study more in order to pass the exam," etc.
Each such concept can be viewed as describing some subset of objects or events defined over a larger set (e.g., the subset of animals that constitute birds). Alternatively, each concept can be thought of as a boolean-valued function defined over this larger set (e.g., a function defined over all animals, whose value is true for birds and false for other animals). This task is commonly referred to as concept learning: approximating a boolean-valued function from examples. Concept learning is the problem of inferring a boolean-valued function from training examples of its input and output. The attribute EnjoySport indicates whether or not Aldo enjoys his favorite water sport on this day.
The task is to learn to predict the value of EnjoySport for an arbitrary day, based on the values of its other attributes.
What hypothesis representation shall we provide to the learner in this case?
Let us begin by considering a simple representation in which each hypothesis consists of a conjunction of constraints on the instance attributes. In particular, let each hypothesis be a vector of six constraints, specifying the values of the six attributes Sky, AirTemp, Humidity, Wind, Water, and Forecast.
For each attribute, the hypothesis will either:
- indicate by a "?" that any value is acceptable for this attribute,
- specify a single required value for the attribute, or
- indicate by a "∅" that no value is acceptable.
To illustrate, the hypothesis that Aldo enjoys his favorite sport only on cold days with high humidity (independent of the values of the other attributes) is represented by the expression (?, Cold, High, ?, ?, ?). In general, any concept learning task can be described by the set of instances over which the target function is defined, the target function, the set of candidate hypotheses considered by the learner, and the set of available training examples.
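This conjunctive representation is easy to make concrete. A sketch, under the assumption that "?" accepts any value and that an instance is a tuple of attribute values (the empty constraint "∅" never equals an attribute value, so a hypothesis containing it matches nothing):

```python
# Conjunctive hypotheses as tuples of constraints, one per attribute.

def matches(hypothesis, instance):
    """Return True if the instance satisfies every attribute constraint."""
    return all(c == "?" or c == v for c, v in zip(hypothesis, instance))

# The example from the text: cold days with high humidity, all else irrelevant.
h = ("?", "Cold", "High", "?", "?", "?")
```

For instance, a cold, high-humidity day satisfies `h` regardless of its other attribute values, while any warm day does not.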
The definition of the EnjoySport concept learning task in this general form is given in Table 2. The set of items over which the concept is defined is called the set of instances, which we denote by X. The concept or function to be learned is called the target concept, which we denote by c. In the current example, the target concept corresponds to the value of the attribute EnjoySport (i.e., c(x) = 1 if EnjoySport = Yes, and c(x) = 0 if EnjoySport = No).
The constraints may be "?", "∅", or a specific value. When learning the target concept, the learner is presented a set of training examples, each consisting of an instance x from X, along with its target concept value c(x).
We will often write the ordered pair (x, c(x)) to describe the training example consisting of the instance x and its target concept value c(x). We use the symbol D to denote the set of available training examples. Given a set of training examples of the target concept c, the problem faced by the learner is to hypothesize, or estimate, c. We use the symbol H to denote the set of all possible hypotheses that the learner may consider regarding the identity of the target concept.
Usually H is determined by the human designer's choice of hypothesis representation. Therefore, inductive learning algorithms can at best guarantee that the output hypothesis fits the target concept over the training data. Lacking any further information, our assumption is that the best hypothesis regarding unseen instances is the hypothesis that best fits the observed training data.
This is the fundamental assumption of inductive learning, and we will have much more to say about it throughout this book.
We state it here informally and will revisit and analyze this assumption more formally and more quantitatively in Chapters 5, 6, and 7.
The inductive learning hypothesis: any hypothesis found to approximate the target function well over a sufficiently large set of training examples will also approximate the target function well over other unobserved examples. The goal of this search is to find the hypothesis that best fits the training examples.
It is important to note that by selecting a hypothesis representation, the designer of the learning algorithm implicitly defines the space of all hypotheses that the program can ever represent and therefore can ever learn.
Consider, for example, the instances X and hypotheses H in the EnjoySport learning task. Given that the attribute Sky has three possible values, and that AirTemp, Humidity, Wind, Water, and Forecast each have two possible values, the instance space X contains exactly 3 · 2 · 2 · 2 · 2 · 2 = 96 distinct instances. A similar calculation, allowing each attribute to take the constraints "?" and "∅" in addition to its ordinary values, shows that there are 5 · 4 · 4 · 4 · 4 · 4 = 5120 syntactically distinct hypotheses within H. Notice, however, that every hypothesis containing one or more "∅" symbols represents the empty set of instances; that is, it classifies every instance as negative.
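The counting above can be checked in a few lines. The attribute value counts follow the text; collapsing all hypotheses that contain "∅" into a single empty hypothesis gives the number of semantically distinct hypotheses:

```python
# Attribute value counts from the text: Sky has 3, the other five have 2 each.
values = [3, 2, 2, 2, 2, 2]

instances = 1
for v in values:
    instances *= v          # 3 * 2^5 = 96 distinct instances

syntactic = 1
for v in values:
    syntactic *= v + 2      # each attribute adds "?" and the empty constraint

# Every hypothesis with an empty constraint classifies all instances negative,
# so semantically they all collapse into one hypothesis.
semantic = 1
for v in values:
    semantic *= v + 1       # ordinary values plus "?"
semantic += 1               # plus the single empty hypothesis
```

This gives 96 instances, 5120 syntactically distinct hypotheses, and 973 semantically distinct hypotheses.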
Our EnjoySport example is a very simple learning task, with a relatively small, finite hypothesis space. Most practical learning tasks involve much larger, sometimes infinite, hypothesis spaces.
We will be particularly interested in algorithms capable of efficiently searching very large or infinite hypothesis spaces, to find the hypotheses that best fit the training data. By taking advantage of this naturally occurring structure over the hypothesis space, we can design learning algorithms that exhaustively search even infinite hypothesis spaces without explicitly enumerating every hypothesis.
Now consider the sets of instances that are classified positive by h1 and by h2. Because h2 imposes fewer constraints on the instance, it classifies more instances as positive. In fact, any instance classified positive by h1 will also be classified positive by h2. Therefore, we say that h2 is more general than h1.
This intuitive "more general than" relationship between hypotheses can be defined more precisely as follows. Definition: let hj and hk be boolean-valued functions defined over X. Then hj is more_general_than_or_equal_to hk (written hj ≥g hk) if and only if (∀x ∈ X) [(hk(x) = 1) → (hj(x) = 1)]. We will also find it useful to consider cases where one hypothesis is strictly more general than the other.
The box on the left represents the set X of all instances, the box on the right the set H of all hypotheses. Each hypothesis corresponds to some subset of X, namely the subset of instances that it classifies positive. The arrows connecting hypotheses represent the more-general-than relation, with the arrow pointing toward the less general hypothesis.
Note that the subset of instances characterized by h2 subsumes the subset characterized by h1; hence h2 is more_general_than h1.
To illustrate these definitions, consider the three hypotheses h1, h2, and h3 from our EnjoySport example, shown in Figure 2. How are these three hypotheses related by the ≥g relation? As noted earlier, hypothesis h2 is more general than h1 because every instance that satisfies h1 also satisfies h2. Similarly, h2 is more general than h3.
Note that neither h1 nor h3 is more general than the other; although the instances satisfied by these two hypotheses intersect, neither set subsumes the other. These definitions depend only on which instances satisfy the two hypotheses and not on the classification of those instances according to the target concept. Formally, the ≥g relation defines a partial order over the hypothesis space H (the relation is reflexive, antisymmetric, and transitive).
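Over a finite instance space, the definition of ≥g can be checked by brute force. A sketch using a small, hypothetical three-attribute instance space (the hypotheses and attribute values below are illustrative, not the book's h1, h2, h3):

```python
from itertools import product

def matches(h, x):
    """Conjunctive-hypothesis rule: "?" accepts any value, else require equality."""
    return all(c == "?" or c == v for c, v in zip(h, x))

def more_general_or_equal(hj, hk, X):
    """hj >=g hk iff every instance in X satisfying hk also satisfies hj."""
    return all(matches(hj, x) for x in X if matches(hk, x))

# A small illustrative instance space: 2 * 2 * 2 = 8 instances.
X = list(product(["Sunny", "Rainy"], ["Warm", "Cold"], ["High", "Normal"]))
ha = ("Sunny", "?", "High")
hb = ("Sunny", "?", "?")
```

Here `hb` imposes fewer constraints than `ha`, so `hb` ≥g `ha` holds while the reverse does not, mirroring the h2-versus-h1 discussion above.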
Informally, when we say the structure is a partial (as opposed to total) order, we mean there may be pairs of hypotheses, such as h1 and h3, such that neither h1 ≥g h3 nor h3 ≥g h1. The ≥g relation is important because it provides a useful structure over the hypothesis space H for any concept learning problem.
The following sections present concept learning algorithms that take advantage of this partial order to efficiently organize the search for hypotheses that fit the training data.