Synthesis lectures on artificial intelligence and machine learning
Volume Designation
#6
ISSN of Series
1939-4616
INTERNAL BIBLIOGRAPHIES/INDEXES NOTE
Text of Note
Includes bibliographical references (pages 95-112).
CONTENTS NOTE
Text of Note
Introduction to statistical machine learning -- The data -- Unsupervised learning -- Supervised learning -- Overview of semi-supervised learning -- Learning from both labeled and unlabeled data -- How is semi-supervised learning possible? -- Inductive vs. transductive semi-supervised learning -- Caveats -- Self-training models -- Mixture models and EM -- Mixture models for supervised classification -- Mixture models for semi-supervised classification -- Optimization with the EM algorithm -- The assumptions of mixture models -- Other issues in generative models -- Cluster-then-label methods -- Co-training -- Two views of an instance -- Co-training -- The assumptions of co-training -- Multiview learning -- Graph-based semi-supervised learning -- Unlabeled data as stepping stones -- The graph -- Mincut -- Harmonic function -- Manifold regularization -- The assumption of graph-based methods -- Semi-supervised support vector machines -- Support vector machines -- Semi-supervised support vector machines -- Entropy regularization -- The assumption of S3VMs and entropy regularization -- Human semi-supervised learning -- From machine learning to cognitive science -- Study one: humans learn from unlabeled test data -- Study two: presence of human semi-supervised learning in a simple task -- Study three: absence of human semi-supervised learning in a complex task -- Discussions -- Theory and outlook -- A simple PAC bound for supervised learning -- A simple PAC bound for semi-supervised learning -- Future directions of semi-supervised learning -- Basic mathematical reference -- Semi-supervised learning software -- Symbols -- Biography.
SUMMARY OR ABSTRACT
Text of Note
Semi-supervised learning is a learning paradigm concerned with the study of how computers and natural systems such as humans learn in the presence of both labeled and unlabeled data. Traditionally, learning has been studied either in the unsupervised paradigm (e.g., clustering, outlier detection), where all the data is unlabeled, or in the supervised paradigm (e.g., classification, regression), where all the data is labeled. The goal of semi-supervised learning is to understand how combining labeled and unlabeled data may change the learning behavior, and to design algorithms that take advantage of such a combination. Semi-supervised learning is of great interest in machine learning and data mining because it can use readily available unlabeled data to improve supervised learning tasks when the labeled data is scarce or expensive. Semi-supervised learning also shows potential as a quantitative tool to understand human category learning, where most of the input is self-evidently unlabeled. In this introductory book, we present some popular semi-supervised learning models, including self-training, mixture models, co-training and multiview learning, graph-based methods, and semi-supervised support vector machines. For each model, we discuss its basic mathematical formulation. The success of semi-supervised learning depends critically on some underlying assumptions. We emphasize the assumptions made by each model and give counterexamples when appropriate to demonstrate the limitations of the different models. In addition, we discuss semi-supervised learning for cognitive psychology. Finally, we give a computational learning theoretic perspective on semi-supervised learning, and we conclude the book with a brief discussion of open questions in the field.
ACQUISITION INFORMATION NOTE
Source for Acquisition/Subscription Address
Safari Books Online
Stock Number
CL0500000344
OTHER EDITION IN ANOTHER MEDIUM
International Standard Book Number
9781598295474
TOPICAL NAME USED AS SUBJECT
Machine learning.
Supervised learning (Machine learning)
COMPUTERS-- Enterprise Applications-- Business Intelligence Tools.