2.1 Detecting Planes

Plane detection is one of the most important concepts of AR applications—it's what lets your AR scene align with the real world. In this lesson, you'll learn how ARKit provides plane detection, and you'll learn how to make use of the planes that it finds.

Hi, and welcome back to Get Started With Augmented Reality for iOS. In this lesson, we are going to get started with ARKit and talk about plane detection. Before we get to plane detection, let's talk about tracking in ARKit in general, because this is really the basis for augmented reality. So, how does the framework do it?

The basis of an ARKit application is an instance of ARSession. It is responsible for performing image analysis and motion tracking. Each session is initialized with an ARConfiguration, which can be either an AROrientationTrackingConfiguration or an ARWorldTrackingConfiguration. If you watch the WWDC talks, be aware that the naming changed between then and the final release. Not every device supports world tracking, which covers all six degrees of freedom, meaning orientation and position. AROrientationTrackingConfiguration only supports three degrees of freedom: as the name suggests, just the orientation. It provides a less immersive experience, but is available on more devices. To have all ARKit features available, you need an A9 processor or higher, which was introduced with the iPhone 6s.

To start an ARSession, you call run with a configuration on it. You can also pause the session, for instance when the view is no longer visible on the screen, by calling pause to save computing power and therefore battery. To restart, you call run again with the configuration. This also works if you want to transition from one configuration to another; ARKit handles this seamlessly. When starting a session, the object creates an AVCaptureSession as well as a CMMotionManager to perform the image analysis and track the device's motion. Since ARKit is a high-level framework, you don't have to get into how this actually works. When the session starts, your device's location automatically becomes the origin of the coordinate system, and the unit length is 1 meter.
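The run and pause lifecycle described above can be sketched like this (a minimal sketch, assuming an ARSCNView called `sceneView` inside a view controller; the helper names are mine, not from the template):

```swift
import ARKit

final class SessionController {
    let sceneView: ARSCNView

    init(sceneView: ARSCNView) {
        self.sceneView = sceneView
    }

    func start() {
        // World tracking gives all six degrees of freedom
        // (orientation and position); requires an A9 chip or newer.
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }

    func pause() {
        // Pause when the view goes off screen to save battery.
        sceneView.session.pause()
    }

    func resume() {
        // Calling run again restarts the session; passing a different
        // configuration here transitions between configurations seamlessly.
        sceneView.session.run(ARWorldTrackingConfiguration())
    }
}
```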
To have accurate tracking, it is obviously necessary to provide the session with images that can actually be tracked. If you are standing in front of a white wall or pointing at a very uniform, plain surface, ARKit will have a hard time determining what is there. It needs a visually interesting scene to find feature points, which it uses to calculate movement and scale.

Feature points aren't the only thing that matters for the augmented reality experience; another important technology is plane detection. This is very important because it allows you to place objects in the real world by referencing surfaces to put them on, be it the floor or a table. Planes can either be infinite, which is useful for a ground plane, or limited to an extent: you don't want your objects floating in the air beyond where the table ends. You enable plane detection by setting the planeDetection property of your configuration to .horizontal. Right now, plane detection is only available for horizontal surfaces, but I guess this will be extended in the future to detect walls and even non-orthogonal planes, judging by the way the API is designed. Don't forget, this is the first version of ARKit. As the framework matures, it will get more and more features added to it and become more usable and resilient.

Before we get to the practical part of this lesson, let's talk about the final class we need from ARKit: ARAnchor. An anchor is a fixed position in the scene, with both position and orientation, that can be used for placing objects. There is a special anchor type, ARPlaneAnchor, which includes the orientation of the plane with respect to gravity, a center point, and an extent that determines the width and length of the plane.

So let's get to the practical part of this lesson. Just so you know, I'm using a beta version of Xcode, but you don't need to. I was just careless enough to accidentally install the iOS 11.1 beta on my phone and can't restore it to a previous version.
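Enabling plane detection and reading the properties of a detected plane anchor looks roughly like this (a sketch; the `describe` helper is mine for illustration):

```swift
import ARKit

// Plane detection is an option on the world tracking configuration.
// In the first ARKit release, only .horizontal is available.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = .horizontal

// An ARPlaneAnchor carries a center point and an extent, in meters.
// The plane's width is the x extent and its length is the z extent.
func describe(_ anchor: ARPlaneAnchor) {
    print("center: \(anchor.center)")
    print("width: \(anchor.extent.x) m, length: \(anchor.extent.z) m")
}
```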
Everything I do here works perfectly in iOS 11.0. I'm going to create a new application in Xcode using the ARKit project template. Here we can choose a name and team, as well as the language we want to use. In our case, it's Swift. Finally, we can set the content technology, which defines which renderer we are using. The options are SceneKit for 3D scenes, SpriteKit for 2D scenes, or, if you're doing your own thing, the Metal framework to integrate ARKit with a custom rendering engine. We are going to use SceneKit.

After we've created the application, you can see the project configuration. Since we need the camera, accelerometer, and gyroscope, you can't use the simulator for this, so be sure you have your team selected. I had already selected it in the dialog, so this is set up for me. Also make sure you have your phone set up. Xcode 9 supports wireless debugging, which is really handy for augmented reality apps, since you're no longer required to have a cable attached and can move around more freely.

So let's go to the view controller file, where the meat of the application is located. The template is pre-configured to render a fighter jet or spaceship, I'm not really sure which. You can build and run it if you want and explore it yourself; I'm going to skip that and get right into code. As you can see, there is a sceneView property of type ARSCNView. This is a regular SceneKit view, extended with AR abilities. In viewDidLoad, the scene view's delegate is set, and statistics are shown for the ARSession. This is useful because you can see performance stats as well as the number of tracked feature points and so on. Then a scene is created, loaded from an asset file. I'm going to remove that, because we don't need it for our example, and model loading is part of the next lesson. In viewWillAppear, a new world tracking configuration is created. Here, we can add plane detection to it right away.
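After removing the template's scene loading and adding plane detection, the view controller lifecycle described above looks roughly like this (a sketch of the modified template, assuming the sceneView outlet from the storyboard):

```swift
import UIKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        // Shows frame rate and tracking information at the bottom of the view.
        sceneView.showsStatistics = true
        // The template's ship scene is removed; model loading comes next lesson.
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        // Enable plane detection right away.
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```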
The session is started there, and in viewWillDisappear it gets paused. The view controller conforms to ARSCNViewDelegate and is the delegate of the scene view, so we can use a callback method to render a plane whenever we detect one. The callback method we are looking for is renderer(_:didAdd:for:). When this method is called, it means that a new node was created for an anchor.

So let's add a plane. I'm going to go over this rather quickly, as creating geometry and nodes is more the focus of the next lesson. First, let's check whether the anchor is actually a plane anchor; we can use a guard clause for this. Then we need some geometry, in this case an SCNPlane, with a width equal to the plane anchor's x extent and a height equal to its z extent. Then we create a node from the geometry and position it in the world. We can use the center of the anchor for that. Because the plane is vertical by default, we have to rotate it so it's actually horizontal. Then I'm going to add it to a dictionary, so we can update it in the future, and add it as a child to the node provided in the callback. Let's declare that dictionary as a map of UUIDs to SCNNodes and initialize it right away.

Since the anchor can change as ARKit's understanding of the scene improves, we might need to update the node. This happens in renderer(_:didUpdate:for:). The reason I stored the plane in a dictionary is so I can fetch it again here and update the geometry's width and height, as well as move the position to the new center. Finally, I'm also going to add the callback for removing a node. This can be called when ARKit merges two planes, because it realizes they are actually the same one. I'm going to fetch the node from the dictionary again here and remove it from its parent node and from the dictionary.

So, it's time to try it out. The office I'm in isn't the ideal spot, because there are a lot of reflective surfaces and a lot of visual noise on the floor.
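The three delegate callbacks described above can be sketched like this (a sketch, assuming the dictionary property is called `planes`; it would live on the view controller in the real project):

```swift
import ARKit
import SceneKit

extension ViewController {

    // Assumed property on the view controller:
    // var planes = [UUID: SCNNode]()

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        // Only react to plane anchors.
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        // Width is the anchor's x extent, height is its z extent.
        let geometry = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                                height: CGFloat(planeAnchor.extent.z))
        let planeNode = SCNNode(geometry: geometry)
        planeNode.position = SCNVector3(planeAnchor.center.x,
                                        planeAnchor.center.y,
                                        planeAnchor.center.z)
        // SCNPlane is vertical by default; rotate it to lie flat.
        planeNode.eulerAngles.x = -.pi / 2
        // Store it so we can update or remove it later.
        planes[planeAnchor.identifier] = planeNode
        node.addChildNode(planeNode)
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor,
              let planeNode = planes[planeAnchor.identifier],
              let geometry = planeNode.geometry as? SCNPlane else { return }
        // The anchor grows or shifts as scene understanding improves.
        geometry.width = CGFloat(planeAnchor.extent.x)
        geometry.height = CGFloat(planeAnchor.extent.z)
        planeNode.position = SCNVector3(planeAnchor.center.x,
                                        planeAnchor.center.y,
                                        planeAnchor.center.z)
    }

    func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
        // Called when ARKit merges two planes into one, for example.
        planes[anchor.identifier]?.removeFromParentNode()
        planes[anchor.identifier] = nil
    }
}
```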
But let's walk over to this big table and see if we can detect it. It works okay: we can see the white plane rendered on the table, and it updates as we move around. To recap: you use an ARConfiguration to run your ARSession, and you can also pause the session. Plane detection is an option of the configuration. Tracking is easier if the scene is visually interesting and not just a plain color. When a new anchor is added, ARKit informs you with a callback. In the next lesson, I'm going to show you how to add 3D objects to your scene. See you there.
