About Doug

Doug Morrison

Hi, I'm Doug Morrison. I'm a PhD researcher at the Australian Centre for Robotic Vision (ACRV) at the Queensland University of Technology (QUT) in Brisbane, Australia, supervised by Dr Juxi Leitner and Professor Peter Corke.

My research develops new strategies for robotic grasping in the unstructured, dynamic environments of the real world; that is, strategies which are general, reactive and knowledgeable about their environments. The goal: create robots that can grasp objects anywhere, all the time.

I was also one of the lead developers of Cartman, the ACRV's winning entry into the 2017 Amazon Robotics Challenge!

News

  • May 2019 - I was recognised as one of the Best Reviewers at ICRA 2019.
  • Nov 2018 - I am super excited to be interning with Amazon's Robotics AI team in Berlin for the next six months!
  • Sep 2018 - Named a Finalist for the Amazon Robotics Best Systems Paper Award in Manipulation 2018 for "Cartman: The Low-cost Cartesian Manipulator"
  • Apr 2018 - I discuss robotic grasping and the ARC on NVIDIA's "The AI Podcast" [Listen Here]
  • Mar 2018 - Cartman's packing his bags again and heading to GTC 2018 in Silicon Valley, where Juxi and I will be giving a presentation. [Slides]
  • Jul 2017 - We won the Amazon Robotics Challenge 2017! Congratulations to everyone involved!

Selected Publications

(See All)


Doug Morrison, Peter Corke, Juxi Leitner
International Journal of Robotics Research (IJRR), June 2019
In this extended version of "Closing the Loop for Robotic Grasping" (RSS 2018), we provide a more in-depth look at our real-time, generative grasp synthesis approach through extended analysis, neural network design and a new multi-view approach to grasping. Additionally, we extend our method to use the new Jacquard grasping dataset and demonstrate the ease of transferring our platform-agnostic approach to a new robot.
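As a rough illustration of the multi-view idea, the sketch below fuses per-pixel grasp quality maps from several viewpoints by simple averaging, assuming the maps have already been reprojected into a shared grid (the reprojection itself is omitted). The map sizes and the fusion-by-averaging rule are illustrative assumptions, not the method from the paper.

    import numpy as np

    def fuse_quality_maps(maps):
        """Average aligned grasp-quality maps and return the fused map
        plus the (x, y) cell of the best grasp."""
        fused = np.mean(np.stack(maps), axis=0)
        y, x = np.unravel_index(np.argmax(fused), fused.shape)
        return fused, (int(x), int(y))

    # Stand-ins for quality maps from three viewpoints, already in a common frame.
    views = [np.random.rand(100, 100) for _ in range(3)]
    fused, (bx, by) = fuse_quality_maps(views)
    print("best grasp cell:", (bx, by), "fused quality:", fused[by, bx])
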
Doug Morrison, Peter Corke, Juxi Leitner
International Conference on Robotics and Automation (ICRA), 2019
Camera viewpoint selection is an important aspect of visual grasp detection, especially in clutter where many occlusions are present. Where other approaches use a static camera position or fixed data collection routines, our Multi-View Picking (MVP) controller uses an active perception approach to choose informative viewpoints based directly on a distribution of grasp pose estimates in real time, reducing uncertainty in the grasp poses caused by clutter and occlusions.
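To give a flavour of the active-perception idea, one can score candidate viewpoints by how much they are expected to reduce the entropy of the current grasp pose distribution, then move to the most informative one. Everything below is a hypothetical stand-in: the discrete belief over grasp poses, the candidate views and the predict_dist_after callback are illustrative, not the MVP controller itself.

    import numpy as np

    def entropy(p):
        """Shannon entropy of a discrete distribution."""
        p = np.asarray(p)
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    def next_best_view(grasp_dist, candidate_views, predict_dist_after):
        """Pick the viewpoint with the largest expected entropy reduction."""
        h_now = entropy(grasp_dist)
        gains = [h_now - entropy(predict_dist_after(grasp_dist, v))
                 for v in candidate_views]
        return candidate_views[int(np.argmax(gains))]

    # Hypothetical predictor: pretend each view sharpens the belief a little,
    # with some views (higher "weight") sharpening it more than others.
    def predict_dist_after(dist, view):
        sharpened = np.asarray(dist) ** (1.0 + view["weight"])
        return sharpened / sharpened.sum()

    grasp_dist = np.array([0.4, 0.3, 0.2, 0.1])  # belief over 4 candidate grasp poses
    views = [{"name": "left", "weight": 0.2}, {"name": "top", "weight": 0.8}]
    print(next_best_view(grasp_dist, views, predict_dist_after)["name"])
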
Doug Morrison, Anton Milan, Nontas Antonakos
In robotics, decisions made based on erroneous visual detections can have disastrous consequences. However, this uncertainty is often not captured by classic computer vision systems or metrics. We address the task of instance segmentation in a robotics context, where we are concerned with uncertainty associated with not only the class of an object (semantic uncertainty) but also its location (spatial uncertainty).
4th Place - First ACRV Probabilistic Object Detection Challenge
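A toy sketch of the two uncertainty types named in the entry above, assuming a detector that outputs a class-probability vector per detection (semantic) and a soft per-pixel mask (spatial); the entropy-based measures are just one plausible way to quantify them, not the paper's evaluation.

    import numpy as np

    def semantic_uncertainty(class_probs):
        """Shannon entropy of a detection's class-probability vector."""
        p = np.asarray(class_probs)
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    def spatial_uncertainty(mask_probs):
        """Mean per-pixel binary entropy of a soft segmentation mask."""
        p = np.clip(np.asarray(mask_probs), 1e-6, 1 - 1e-6)
        return float(np.mean(-(p * np.log(p) + (1 - p) * np.log(1 - p))))

    print(semantic_uncertainty([0.7, 0.2, 0.1]))        # fairly confident class
    print(spatial_uncertainty(np.random.rand(64, 64)))  # very fuzzy mask
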
Doug Morrison, Peter Corke, Juxi Leitner
Robotics: Science and Systems (RSS), 2018
This paper presents a real-time, object-independent grasp synthesis method which can be used for closed-loop grasping. The lightweight and single-pass generative nature of our GG-CNN allows for closed-loop control at up to 50Hz, enabling accurate grasping in non-static environments where objects move and in the presence of robot control inaccuracies.
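To make the single-pass, per-pixel formulation concrete: the GG-CNN predicts grasp quality, rotation angle and gripper width at every pixel of a depth image, so a grasp can be read off directly at the highest-quality pixel. The sketch below decodes such maps; the random arrays are stand-ins for network output, not the published implementation.

    import numpy as np

    def decode_grasp(quality, angle, width):
        """Read the grasp at the highest-quality pixel: its image location,
        in-plane rotation angle, and gripper width."""
        y, x = np.unravel_index(np.argmax(quality), quality.shape)
        return {"pixel": (int(x), int(y)),
                "angle": float(angle[y, x]),
                "width": float(width[y, x])}

    # Random maps standing in for the network's per-pixel output on a 300x300 input.
    h = w = 300
    quality = np.random.rand(h, w)
    angle = np.random.uniform(-np.pi / 2, np.pi / 2, size=(h, w))
    width = np.random.uniform(0, 150, size=(h, w))
    print(decode_grasp(quality, angle, width))
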
Doug Morrison, A.W. Tow, M. McTaggart, R. Smith, N. Kelly-Boxall, S. Wade-McCue, J. Erskine, R. Grinover, A. Gurman, T. Hunn, D. Lee, A. Milan, T. Pham, G. Rallos, A. Razjigaev, T. Rowntree, K. Vijay, Z. Zhuang, C. Lehnert, I. Reid, P. Corke, J. Leitner
International Conference on Robotics and Automation (ICRA), 2018
A system-level description of Cartman, our winning entry into the 2017 Amazon Robotics Challenge.
Finalist - Amazon Robotics Best Systems Paper Award in Manipulation 2018
A. Milan, T. Pham, K. Vijay, D. Morrison, A.W. Tow, L. Liu, J. Erskine, R. Grinover, A. Gurman, T. Hunn, N. Kelly-Boxall, D. Lee, M. McTaggart, G. Rallos, A. Razjigaev, T. Rowntree, T. Shen, R. Smith, S. Wade-McCue, Z. Zhuang, C. Lehnert, G. Lin, I. Reid, P. Corke, J. Leitner
International Conference on Robotics and Automation (ICRA), 2018
We present our approach for robotic perception in cluttered scenes that led to winning the recent Amazon Robotics Challenge (ARC) 2017. In contrast to traditional approaches, which require large collections of annotated data and many hours of training, the task here was to obtain a robust perception pipeline with only a few minutes of data acquisition and training time. To that end, we present the two strategies that we explored.