
2.1 The Core ML Model

In this lesson we are going to focus on the Core ML models that contain the machine learning algorithms, and I’ll show you how to convert models from external tools.


Hi, and welcome back to Image Recognition on iOS 11 With Core ML. In this lesson, we get started with machine learning on iOS by looking at the Core ML model. Regardless of what you're trying to achieve with machine learning on iOS, at the core of it there's always a model. It is the most important part of your system, as it contains the classification algorithm. Since Core ML isn't capable of training the system yet, you have to do that with an external tool. For neural networks, it supports models from Keras or Caffe. TensorFlow, although a very popular framework, is not supported directly. You can use some of the low-level APIs of the Core ML converter tool to do it yourself, though.

Let's look at an example from Apple's documentation. It is a model to determine the pricing for a Mars habitat. In this case, it's a pipeline regressor, another type of machine learning algorithm. It has one class, the MarsHabitatPricer, and three inputs: the number of solar panels, the number of greenhouses, and the size in acres. There is just one output: the price, in millions of dollars, I presume. To use it, you can instantiate the model and then call the prediction function on the model to get the price. You have to pass in the parameters that function expects. Since there can be multiple output parameters, it returns an output object, and from there you can read the price.

Let's have a closer look at the generated interface for the model. Everything you can see here can also be created manually yourself, if there is a need for it, for instance if a lot of data gets copied around excessively. You can see that there is also a MarsHabitatPricerInput class that conforms to MLFeatureProvider. It is used to access features by providing a set of feature names and a function that returns the value for a given feature name. The MarsHabitatPricerOutput works the same way, as it is also a feature provider.
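The generated interface follows a simple pattern: an input type that exposes feature names and their values, and a model type whose prediction function maps an input object to an output object. As a rough Python mock of that pattern (the real interface is generated Swift code; the class layout follows the lesson, but the pricing rule and values below are purely illustrative and not Apple's actual model):

```python
class MarsHabitatPricerInput:
    """Mimics MLFeatureProvider: exposes feature names and their values."""
    def __init__(self, solarPanels, greenhouses, size):
        self._features = {"solarPanels": solarPanels,
                          "greenhouses": greenhouses,
                          "size": size}

    @property
    def featureNames(self):
        # MLFeatureProvider exposes a set of feature names
        return set(self._features)

    def featureValue(self, name):
        # ...and a lookup function that returns the value for a feature name
        return self._features.get(name)


class MarsHabitatPricer:
    """Stands in for the generated model class: prediction(input) -> output."""
    def prediction(self, inp):
        # Placeholder linear pricing rule, purely for illustration
        price = (0.5 * inp.featureValue("solarPanels")
                 + 1.0 * inp.featureValue("greenhouses")
                 + 0.1 * inp.featureValue("size"))
        return {"price": price}


pricer = MarsHabitatPricer()
out = pricer.prediction(MarsHabitatPricerInput(solarPanels=2, greenhouses=1, size=10))
```

The Swift original works the same way, except that the output is another feature-provider object rather than a dictionary.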
Finally, the pricer class itself is just a convenience class that supports loading the compiled Core ML model from the bundle, running predictions on it, and converting the output to our custom output type. As you can see, using a Core ML model isn't really that difficult.

Let's create a new project that we can use with Core ML. I'm going to make it a Single View App and just call it CoreMLDemo. As a model, I'm going to use a very simple scikit-learn linear regression model and convert it to a Core ML model, so you can see the workflow that is going on here. Many of the machine learning libraries work in Python, and so does the Core ML tooling. I'm going to assume you already have Python and pip installed, which you can do with Homebrew, for instance, on a Mac. Note that Core ML Tools only works with Python 2.7. I'm going to install the SciPy, NumPy and scikit-learn packages, as well as coremltools, with pip.

Now let's open the file I have prepared. So here is my simple linear regression model. I'm now going to save it as a Core ML model file. To do this, we first have to import coremltools; then we can convert the linear model by using the sklearn converter's convert function. To rename the input value, I can pass in an array as the second parameter. The third one would be the output name, but it defaults to "prediction", which is fine by me. Now we can add metadata like the author name, or descriptions for the input and output. Finally, we can save the model. I'm going to save it with a capitalized name, since Core ML will use that to generate the class name in Swift. Let's run it, which creates the file for us.

We can drag that file into the new project to use it. If we have a look at it, Xcode shows the input and output, as well as the name and author. It is of type pipeline regressor. I'm going to use it in the app delegate and calculate a value directly after launching. There will be more interaction in the next lesson. First of all, I need to create a new model object from the linear regression class.
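Before moving on to the Swift side, the Python conversion workflow just described can be sketched roughly like this. The training data, author name, and file name are my own placeholders (the transcript doesn't show the actual values used in the lesson); the converter call and metadata fields follow the coremltools API:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy training data for y = 2x + 1 (hypothetical -- the lesson's actual
# training data is not shown in the transcript).
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

model = LinearRegression().fit(X, y)

# The conversion step needs coremltools (Python 2.7 only at the time of
# this lesson), so it is guarded here in case the package is missing.
try:
    import coremltools

    coreml_model = coremltools.converters.sklearn.convert(
        model,
        ["input"],       # rename the input feature
        "prediction")    # output name ("prediction" is also the default)

    # Optional metadata shown in Xcode's model inspector (placeholder values)
    coreml_model.author = "Jane Doe"
    coreml_model.input_description["input"] = "The input value"
    coreml_model.output_description["prediction"] = "The predicted value"

    # Capitalized file name, since Core ML derives the Swift class name from it
    coreml_model.save("LinearRegression.mlmodel")
except ImportError:
    pass  # conversion skipped when coremltools is not installed
```

Dragging the resulting `LinearRegression.mlmodel` into the Xcode project then generates the Swift class used in the rest of the lesson.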
Then we can call model.prediction, passing in the value and storing the output in a variable. Since this can throw, we have to use try. I'm fine with the app crashing if it fails, since this is just a demo, so I'm using try with a bang (try!). I'm going to use an input of 2.3, chosen at random. Finally, we can print the output to the console with the print function, accessing the prediction property. Let's build and run this in the simulator. After the simulator launches, you can see in the Xcode console that there is an output produced by the model. Great.

To recap: Core ML doesn't yet support creating and training machine learning algorithms. You need an external tool, and then you convert the result to a Core ML model. The model automatically generates a convenient class to use in Swift. In your application, you can instantiate the model and use prediction to get an output for your input parameters. In the next lesson, I'm going to use the Vision framework to do image recognition and classification. See you there.
