
4.5 Numeric Types

The last type to really get a handle on, or at least be aware of, is the family of numeric types. These are common in most other programming languages, so they shouldn't come as much of a surprise. They fall into three categories.

First, the integer types: Int, Int8, Int16, Int32, and Int64, where the numbers refer to the bit size. An Int8 is a signed value, so it ranges from -128 up to 127, and there are 16-, 32-, and 64-bit versions beyond that. Plain Int uses whatever the word size is on the particular system you're on: on a 32-bit system you get a 32-bit integer, and on a 64-bit system you get a 64-bit integer.

On the other side of that spectrum are the unsigned integers, with very similar mappings to the signed versions: UInt8, UInt16, UInt32, and UInt64. A UInt8, instead of covering -128 to 127, covers 0 to 255. And once again, plain UInt, where no bit width is specified, uses the word size of your particular system, whether that's 32-bit or 64-bit.

We also have the floating-point types: Float, Double, and Float80, and you'll see Float32 and Float64 in there as well. Typically, Float represents 32-bit floating-point numbers and Double represents 64-bit floating-point numbers, while Float80 uses extended precision to give you 80-bit floating-point numbers. So you have a wide range of values you can use in the floating-point space.
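A quick sketch of those ranges in a playground, assuming a typical 64-bit machine (Float80 is left out because it is only available on x86 platforms):

```swift
// Signed vs. unsigned 8-bit ranges
print(Int8.min, Int8.max)     // -128 127
print(UInt8.min, UInt8.max)   // 0 255

// Plain Int matches the platform word size (8 bytes on a 64-bit system)
print(MemoryLayout<Int>.size)

// Floating-point widths: Float is 32-bit, Double is 64-bit
print(MemoryLayout<Float>.size, MemoryLayout<Double>.size)   // 4 8
```

The `min`/`max` properties come from the standard library's fixed-width integer types, so you never have to hard-code these bounds yourself.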
Now, one thing to keep in mind: we have been talking a little bit about inferred types within the Swift programming language. Say we want to create a floating-point number using implicit typing, something like var floatValue = 23.45. What is interesting is that, regardless of the value, the default inferred type for a floating-point literal initialized this way is Double. So if you want this to be a Float, you have to annotate the variable explicitly, as in var floatValue: Float = 23.45, so it isn't inferred as a Double. And because we're forcing this into a 32-bit Float, you may see some precision artifacts in the stored value; if we just leave the annotation off, it's assumed to be a Double, and we get the value we're expecting. So that's one thing to watch out for.

And then finally, and it's kind of interesting that it gets lumped into this category of numeric types, there is the Boolean. Which actually isn't that surprising if you really think about it, because boolean values are either true or false, on or off, and those values are typically represented by a one or a zero. So Bool falls into this category as well, with values of either true or false.

So that's basically it in a nutshell as far as the numeric types go. You have all the basic operations of mathematics built in, and you can convert these types back and forth, obviously watching out for precision. But that sort of thing is not new in the Swift programming language; it is common across all the different programming languages out there.
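A short sketch of those inference and conversion rules (the variable names here are just for illustration):

```swift
let inferred = 23.45              // a floating-point literal is inferred as Double
print(type(of: inferred))         // Double

let narrow: Float = 23.45         // explicit annotation forces a 32-bit Float
let widened = Double(narrow)      // converting back exposes the precision loss
print(widened)                    // close to, but not exactly, 23.45

let flag: Bool = true             // Bool holds just true or false
print(flag)

// Numeric conversions are always explicit in Swift
let whole = Int(inferred)         // truncates toward zero
print(whole)                      // 23
```

Note that Swift never converts between numeric types implicitly; even widening a Float to a Double requires the `Double(...)` initializer, which is how the language keeps these precision questions visible.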
