Artificial Intelligence: A practical example that can be applied to Construction and Manufacturing in the Future

Today we follow all safety precautions when working with robots. In the future, some of those precautions may be relaxed.

Last week I covered the work of Principal Research Engineer, Dr. Hui Li, and how she is using artificial intelligence to improve synthetic data to train robots.

This blog post covers some of the directly related work done by Senior Director of Machine Intelligence, Mike Haley, and Artificial Intelligence Lab Architect, Yotto Koga. It is based on a talk that Mike gave at an internal Tech Summit where our developers gathered to coordinate their efforts and share best practices.

Many people believe that robots will take their jobs, but at Autodesk, we want robots to augment our jobs. In the same way that our ability to know things has been augmented by a smartphone connected to the internet, we want robots to be able to do things for us. For robots to work alongside humans, however, robots have to get smarter. Artificial intelligence can be applied in a variety of ways to make that happen.

  • We've all seen industrial robots that are used in assembly lines such as those in automotive plants.


    Though these robots are great at repetitive tasks, they are dumb when it comes to their environment. Everything has to be exactly in place, and setup of the assembly line is labor-intensive. For example, in the picture above, if the car bodies coming down the assembly line are a quarter of an inch off, the robot that attaches the door hinges to the frame will attach those hinges a quarter of an inch from where they need to be. In addition, if the design of the car frame changes even a little, the assembly line has to be reconfigured to accommodate the changes. Getting robots to do the jobs of humans is not all tea and crumpets.

  • Now imagine if you wanted to have a robot that could take something that looks like this:


    and turn it into something that looks like this:


  • The first thing to do is attach a gripper to the robot arm.


    The gripper allows the robot to pick up the LEGO bricks.

  • The second thing to do is to attach a camera so that the robot has vision.


    Vision allows the robot to see the LEGO bricks.

  • Here's where machine learning comes into play. You can run thousands of trials where the robot can learn to recognize and pick up a LEGO brick.

    This is challenging because LEGO bricks come in different shapes and sizes. In addition, each brick can be in a variety of orientations (right side up, on its side, upside down) and rotated at any angle. In the video above, the integration of a camera allows the robot to recognize the LEGO bricks regardless of orientation.
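The orientation problem can be made concrete with a toy sketch: if the training data is augmented with random rotations, the recognizer sees every possible angle during its thousands of trials. The `rotate` and `augmented_samples` helpers and the 2x4 brick footprint below are illustrative stand-ins, not the actual training pipeline.

```python
import math
import random

def rotate(points, angle):
    """Rotate 2-D points about the origin by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def augmented_samples(brick, n, seed=0):
    """Return n randomly rotated copies of a brick outline, so a
    recognizer sees every orientation during training."""
    rng = random.Random(seed)
    return [rotate(brick, rng.uniform(0, 2 * math.pi)) for _ in range(n)]

# A 2x4 brick footprint (studs wide x studs long), in arbitrary units.
brick_2x4 = [(0, 0), (4, 0), (4, 2), (0, 2)]
samples = augmented_samples(brick_2x4, 1000)
```

Because rotation preserves edge lengths, every augmented sample is still recognizably the same brick, just at a different angle — which is exactly the invariance the robot has to learn.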

  • Once the robot learns how to recognize the bricks, it can then be taught to use that information to place the brick in the right location. This allows a picture like this:


    to result in actions like this:

    This process allows the robot to assemble bespoke objects without being specifically preprogrammed for that object.

    • So unlike an automobile assembly line that has to have everything in its place, an artificially intelligent robot can account for the LEGO bricks being in any random position.
    • Unlike an automobile assembly line that needs to be specifically reconfigured each time a design change is made to a car, an artificially intelligent robot can assemble anything that can be made with LEGO bricks without specifically programming it to make that thing.
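The pick-and-place process described above might be sketched as a simple loop over a build plan. Here `find_brick`, `move_to`, `grip`, and `release` are hypothetical robot and vision interfaces invented for illustration, not an actual Autodesk API.

```python
def assemble(build_plan, find_brick, move_to, grip, release):
    """Place each brick from the plan: locate it wherever it happens
    to lie, pick it up, and set it down at its target pose."""
    placed = []
    for step in build_plan:
        pose = find_brick(step["brick"])   # vision: brick at any position/angle
        move_to(pose)
        grip()
        move_to(step["target"])            # target pose comes from the design
        release()
        placed.append(step["brick"])
    return placed
```

The key point is that the loop is driven by the design itself: change the build plan and the same robot assembles something different, with no reprogramming.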
  • But wait, there's more. Robots learn from repetitive trials. What if we replaced the camera that the robot uses to learn how to locate, identify, and pick up bricks with a simulation of LEGO bricks? In other words, instead of feeding the learning process with footage of actual LEGO bricks using a camera, we feed it with a simulation of a gripper picking up LEGO bricks. This is what the robot would see:


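To make the simulation idea concrete, here is a toy sketch of generating labeled training data without a camera or a physical robot: each simulated grasp attempt yields a (state, success) pair that the learning process can consume. The `simulate_grasp` success model is invented purely for illustration.

```python
import random

def simulate_grasp(angle_error, rng):
    """Toy physics: a grasp succeeds when the gripper is aligned
    closely enough with the brick (with a little noise)."""
    return abs(angle_error) < 0.3 + rng.uniform(-0.05, 0.05)

def run_trials(n, seed=0):
    """Generate n simulated grasp attempts as (angle_error, success)
    training pairs -- no camera footage required."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        err = rng.uniform(-1.0, 1.0)
        trials.append((err, simulate_grasp(err, rng)))
    return trials

data = run_trials(10_000)
```

Ten thousand virtual trials take a fraction of a second; ten thousand physical trials would take days.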
  • The beauty of this approach is that multiple simulations can be run in parallel.

    After all of the simulations have been run, the results can be combined, allowing a robot to learn in a fraction of the time. Instead of one robot performing 10,000 physical trials, 10,000 virtual robots can perform those 10,000 trials simultaneously. Even more astonishing: once one robot has learned from the simulations, all robots have, because the learned behavior can be copied from one robot to the next.
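The combine-and-copy step can be sketched as follows: many virtual robots each run a batch of simulated trials, their per-angle success counts are merged into one shared table, and that table is what gets "copied" to every robot. The success model and the eight angle bins here are illustrative assumptions, not the real learning system.

```python
from concurrent.futures import ThreadPoolExecutor
import random

def run_batch(seed, n=1000):
    """One virtual robot's batch of simulated grasp trials: count
    (successes, attempts) per discretized approach-angle bin."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n):
        angle_bin = rng.randrange(8)                     # discretized approach angle
        success = rng.random() < (1.0 - angle_bin / 8)   # toy success model
        ok, total = counts.get(angle_bin, (0, 0))
        counts[angle_bin] = (ok + success, total + 1)
    return counts

def merge(batches):
    """Combine every virtual robot's counts into one shared table."""
    merged = {}
    for counts in batches:
        for b, (ok, total) in counts.items():
            mok, mtotal = merged.get(b, (0, 0))
            merged[b] = (mok + ok, mtotal + total)
    return merged

with ThreadPoolExecutor() as pool:
    batches = list(pool.map(run_batch, range(16)))  # 16 virtual robots in parallel
shared_policy = merge(batches)
# Any number of physical robots can now load `shared_policy` directly.
```

Because each batch is seeded independently, the batches can run in any order on any number of workers and still merge to the same result.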

It's great to see machine learning in action. You can teach an old robot new tricks. As Autodesk has the Design Graph, one can imagine how LEGO bricks could be replaced by CAD parts, resulting in the robotic assembly of designed products. This notion could then be applied to construction tasks like bricklaying. Artificial intelligence will be coming to a project near you.

  • One day it will be possible to have a robot take anything like this:


    and make this:


Thanks to Hui, Mike, and Yotto for your work in this area.

Robotic assembly is alive in the lab.