Autonomous Visual Learning for Robotic Systems


Beale, D., 2012. Autonomous Visual Learning for Robotic Systems. Thesis (Doctor of Philosophy (PhD)). University of Bath.



    This thesis investigates the problem of visual learning using a robotic platform. Given a set of objects, the robot's task is to autonomously manipulate, observe, and learn from them, allowing it to recognise objects in a novel scene and pose, or to separate them into distinct visual categories. The main focus of the work is on autonomously acquiring object models using robotic manipulation. Autonomous learning is important for robotic systems: in the context of vision, it allows a robot to adapt to new and uncertain environments by updating its internal model of the world, and it reduces the amount of human supervision needed to build visual models. This leads to machines that can operate in environments with rich and complicated visual information, such as the home or an industrial workspace, as well as in environments that are potentially hazardous for humans. The central hypothesis is that inducing robot motion on objects aids the learning process. It is shown that the extra information from the robot's sensors is sufficient to localise an object and distinguish it from the background, and that decisive planning allows the object to be separated and observed from a variety of poses, giving a good foundation for building a robust classification model. Contributions include a new segmentation algorithm, a new classification model for object learning, and a method for allowing a robot to supervise its own learning in cluttered and dynamic environments.


    Item Type: Thesis (Doctor of Philosophy (PhD))
    Creators: Beale, D.
    Uncontrolled Keywords: robotics, machine learning, computer vision
    Departments: Faculty of Science > Computer Science
    Publisher Statement: UnivBath_PhD_2012_D_Beale.pdf: © The Author
    ID Code: 32210
