Author
Klaus Peter Gross
Title
Concept Acquisition through Attribute Evolution and Experiment Selection
Date
Aug 1991
Abstract
Robots must perform tasks despite the inevitable uncertainties that exist in their environment. Uncertainties arise from many sources. Feeders deliver parts with some uncertainty. Copies of the same part are not identical, but vary within specified tolerances. Sensing allows a robot to significantly reduce uncertainty and thereby extend the range of possible tasks. Sensor-based programs, however, are difficult to write, and are seldom, if ever, applicable to other tasks. Hence, automatic synthesis of robot programs is clearly desirable. Lozano-Perez, Mason, and Taylor [29] developed a framework for automatic synthesis of fine-motion strategies. The fundamental element in their framework is the pre-image: the set of states from which a goal can be attained in a single motion. Given a process for constructing pre-images, backward-chaining is used to formulate a complete plan. Analytic techniques can often be used to construct the pre-images; when analytic techniques are not available, empirical techniques can be used instead. This thesis presents a method for learning from examples that allows robots to inductively construct a description of the pre-images for a set of goals, given a set of actions that achieve the goals. Learning from examples may be viewed as the search for consistent and concise concept descriptions derived from a set of training examples. Most of the previous research in this area assumes that the concept description language is static and that an outside source selects appropriate training examples. Unfortunately, these assumptions are not appropriate for learning pre-images. This thesis develops CAT, a general data-driven method that semi-incrementally learns multiple disjunctive concept descriptions from examples. The system tolerates bounded measurement noise, dynamically evolves a concept description language, and actively selects training examples. The language evolution and experiment selection mechanisms improve the qualitative and quantitative aspects of the constructed representations.
Category
CMUTR
Category: CMUTR
Institution: Department of Computer Science, Carnegie
        Mellon University
Abstract: Robots must perform tasks despite the inevitable uncertainties
        that exist in their environment.
        Uncertainties arise from many sources.
        Feeders deliver parts with some uncertainty.
        Copies of the same part are not identical, but vary within 
        specified tolerances.
        Sensing allows a robot to significantly reduce uncertainty and
        thereby extend the range of possible tasks. 
        Sensor-based programs, however, are difficult to write, and are
        seldom, if ever, applicable to other tasks.
        Hence, automatic synthesis of robot programs is clearly 
        desirable.
        Lozano-Perez, Mason, and Taylor [29] developed a framework for
        automatic synthesis of fine-motion strategies.
        The fundamental element in their framework is the pre-image: 
        the set of states from which a goal can be attained in a single
        motion.
        Given a process for constructing pre-images, backward-chaining 
        is used to formulate a complete plan.
        Analytic techniques can often be used to construct the
        pre-images.
        When analytic techniques are not available, empirical
        techniques can be used instead.
        This thesis presents a method for learning from examples that
        allows robots to inductively construct a description of the 
        pre-images for a set of goals, given a set of actions that 
        achieve the goals.
        Learning from examples may be viewed as the search for 
        consistent and concise concept descriptions derived from a set 
        of training examples.
        Most of the previous research in this area assumes that the 	
        concept description language is static and that an outside 
        source selects appropriate training examples.
        Unfortunately, these assumptions are not appropriate for
        learning pre-images.
        This thesis develops CAT, a general data-driven method that 
        semi-incrementally learns multiple disjunctive concept 
        descriptions from examples.
        The system tolerates bounded measurement noise, dynamically 
        evolves a concept description language, and actively selects 
        training examples.
        The language evolution and experiment selection mechanisms
        improve the qualitative and quantitative aspects of the     
        constructed representations.
Number: CMU-CS-91-186
Bibtype: TechReport
Month: Aug	
Author: Klaus Peter Gross
Title: Concept Acquisition through Attribute Evolution and Experiment
        Selection
Year: 1991
Address: Pittsburgh, PA
Super: @CMUTR
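
Note: the pre-image/backward-chaining framework summarized in the abstract can be
illustrated with a small sketch. This is not code from the report; the names
backchain and preimage, the finite state and action sets, and the bounded search
depth are illustrative assumptions, and the actual framework reasons over
continuous configuration space under sensing and control uncertainty.

    def backchain(start_states, goal, actions, preimage, max_depth=10):
        """Breadth-first backward chaining over pre-images.

        Returns a list of actions, in forward execution order, whose chained
        pre-images cover start_states, or None if no plan of at most
        max_depth motions is found. preimage(region, action) must return the
        set of states from which `action` reaches `region` in a single motion.
        """
        frontier = [(frozenset(goal), [])]        # (subgoal region, plan built backwards)
        visited = {frozenset(goal)}
        for _ in range(max_depth + 1):
            next_frontier = []
            for region, plan in frontier:
                if frozenset(start_states) <= region:
                    return list(reversed(plan))   # last subgoal found is executed first
                for action in actions:
                    pre = frozenset(preimage(region, action))
                    if pre and pre not in visited:
                        visited.add(pre)
                        next_frontier.append((pre, plan + [action]))
            frontier = next_frontier
        return None

    # Toy usage: a one-dimensional world where the single action "step"
    # moves one cell toward 0, so the pre-image of a region is the set of
    # cells one step away from it.
    states = set(range(5))
    preimage = lambda region, action: {s for s in states if s - 1 in region}
    print(backchain({3}, {0}, ["step"], preimage))   # -> ['step', 'step', 'step']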