The numbers used by computers are not quite the same as those you have been taught about in your school maths lessons - but most of the time you will not notice the difference. You will notice the difference when something goes wrong because you are expecting them to behave like your school numbers and they don't.
Integers (Counting Numbers)
We all know how to count. You start at 1 and go on and on upward, as high as you like. These numbers are known to mathematicians as the Natural Numbers.
The number zero was a big discovery of Indian mathematicians in about the ninth century, and like most really important discoveries that we use all the time we forget that it had to be discovered. If you find it difficult to believe it was so important, try doing multiplication using Roman numerals.
Later on, we had negative numbers (-1, -2, -3 etc.), which were not completely accepted by European mathematicians until the 17th Century (though they had been used in China as long ago as 200 BC). They can go as low as you like. Whatever negative number you think of, you can always subtract 1 from it to get an even lower number. So, the mathematical integers - the counting numbers, zero and the negative numbers - go on forever in either direction.
Computers count in binary using patterns of positive and negative electrical charges in the computer memory. Each location in memory capable of holding a positive or negative charge is known as a bit. Before you run any program (and to some extent before you design the computer hardware) you have to decide how many bits you will put aside to represent integers. The typical choices at the time of writing are 32 and, increasingly, 64. In most modern computers the 32-bit representation allows us to count from -2,147,483,648 up to 2,147,483,647 - which is 2 multiplied by itself 31 times, minus 1. The 64-bit representation goes up to 9,223,372,036,854,775,807.

If you add 1 to the number 2,147,483,647 in a computer with 32-bit integers you will probably get the answer -2,147,483,648. This is not the mathematical behaviour you expect from integers, but it makes sense when you think about the way addition works in computers. Adding one to the bit pattern that represents the largest integer must produce another bit pattern, and that pattern cannot represent a larger integer. It often turns out to be the bit pattern for the smallest integer. (This depends on the hardware design and is not always true. Sometimes the operating system catches the invalid operation and reports an error. You can look this up on Wikipedia if you want to know exactly why it is so - but unless you are going to be a computer scientist it will not actually help you much.)
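If you want to see this wrap-around for yourself, a tiny Processing sketch along the following lines should show it (the variable name is just made up for the example):

int biggest = 2147483647;   // the largest value a 32-bit int can hold
println(biggest);           // prints 2147483647
println(biggest + 1);       // wraps around to -2147483648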
So, we do not have an infinite number of computing integers. In fact, although 2,147,483,647 is a very large number (and 9,223,372,036,854,775,807 even more so), most modern computers could count up to two billion in a second or so. Although there are very few occasions when one actually needs to count to more than two billion, there are rather more occasions when we get some calculation wrong and numbers like this start turning up uninvited - and, like many uninvited guests, they start behaving badly.
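If you are curious how long counting that far actually takes on your own machine, a rough sketch like this one gives an idea - the exact time varies a great deal from machine to machine, and a clever language implementation may shortcut such a simple loop:

int start = millis();               // milliseconds since the sketch started
long total = 0;                     // a 64-bit integer, just to be safe
for (int i = 0; i < 2000000000; i++) {
  total = total + 1;                // count, one at a time, up to two billion
}
println("Counted to " + total + " in " + (millis() - start) + " milliseconds");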
Now for an important point that sounds trivial - but is not. Computing integers are exact: 5 means exactly the number five. This is not always the case with other ways of representing numbers in computers.
Decimals and Floating Point Numbers
These are numbers like 3.456 or 365.24. A number like this is really a fraction (you could, for example, write it as 36524/100). If you give two fractions (however close together) to a mathematician he will tell you that he can find an infinite number of new fractions between them. (This is easy: divide the interval into two, then divide by two again and so on...)
We need floating point numbers because a lot of calculations - for example in physics - work with numbers much larger than two billion, or very, very much smaller than 1 (for example when talking about the size of the universe or the size of atoms). Hence, we allow ourselves a certain number of decimal digits (say 6 or 7) and then a power of ten (10 = 10¹, 100 = 10², 1000 = 10³, 10000 = 10⁴ and so on), allowing us to define values such as 1.4960 × 10¹¹ (the distance from the Earth to the Sun in metres) or 9.10938356 × 10⁻³¹ (the mass of an electron in kilograms).
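In Processing such numbers are typed with an 'e' standing for "times ten to the power of". A small illustration, with variable names invented for the purpose:

float earthToSun = 1.4960e11;         // 1.4960 x 10^11 metres
float electronMass = 9.10938356e-31;  // 9.10938356 x 10^-31 kilograms
println(earthToSun);
println(electronMass);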
You can probably guess from our experience with the integers that computers with a fixed number of bits can only represent a finite number of possible fractional values exactly. If you try to get the computer to calculate a value that falls between two adjacent representable fractions then you will clearly not get exactly the right answer - you get one or the other of the representable values, so you have a rounding error. (Hence the mass of the electron could not be represented exactly in 32-bit arithmetic - too many digits - but it could in 64-bit arithmetic.) Programmers doing physics soon learn that you have to be very careful doing arithmetic with very small numbers (which cannot be accurately represented) because you can get very big errors building up in the arithmetic.
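The sketch below (again with invented names) shows a 32-bit float running out of digits:

float mass = 9.10938356e-31;  // the electron's mass again: too many digits for a 32-bit float
println(mass);                // what is printed has been rounded to about 7 significant digits
float big = 16777216;         // 2 multiplied by itself 24 times
println(big + 1 == big);      // prints true: adding 1 is lost in the rounding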
Computers have been designed so that the accuracy is good enough for most practical purposes, such as doing accurate weather predictions, providing you are not careless with your programming. When you do complex maths on a computer you always need to worry about small rounding errors at one point getting magnified into large errors later in the calculation. You may not think that we are going to need particularly complex maths to produce our images, and on the whole this is true. Some people, however, may like to investigate using fractals to produce images and it is just possible that they do not get the results they expect because of problems of this type.
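A simple way to watch small errors build up is to add a number that cannot be stored exactly, such as 0.1, over and over again - something along these lines:

float total = 0;
for (int i = 0; i < 10000; i++) {
  total = total + 0.1;        // 0.1 cannot be stored exactly, so each addition is slightly off
}
println(total);               // prints something close to, but not exactly, 1000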
Wikipedia tells the whole story for those who want to know more.
For computers, therefore, integers and floating point numbers are not interchangeable. So, the integer 3 is not the same as 3.0 because the bit patterns of electric charges in the computer memory are completely different.
In fact, when we set aside memory to hold numbers in the Processing language we always have to say whether we are trying to represent integers or floating point numbers as in, for example:
int number_of_sides;   // sets aside space for an integer
float angle;           // sets aside space for a floating point number
The computer has special circuits to take the bit patterns for two integers and add them together spitting out the bit pattern representing the sum of the two integers. It can also do integer multiplications, and so on. It has different circuits for adding and multiplying floating point numbers. It does not in general have circuits that let it add or multiply integers and floating point numbers directly.
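One place where you can actually see that integer arithmetic and floating point arithmetic are different operations is division (the numbers here are chosen purely for illustration):

println(7 / 2);      // integer division: prints 3, the remainder is thrown away
println(7.0 / 2.0);  // floating point division: prints 3.5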
Therefore, we can only get the computer to add the number stored in an integer memory slot to the one in a floating point memory slot if we first convert one of them into the other type of bit representation. (This is a special operation built into the computer design.) So, when faced with 5 + 365.24, we could convert 365.24 to an integer (dropping the 0.24) and get 365 + 5 = 370. If instead we convert 5 to the floating point number 5.0, we get 370.24, which is probably what we want most of the time.
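In Processing the automatic version of this looks something like the following (the variable names are invented for the example):

int whole = 5;
float fraction = 365.24;
println(whole + fraction);   // 5 is quietly converted to 5.0, so this prints 370.24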
You could write "float(5) + 365.24" explicitly to make it clear exactly what is going on. This gets tedious, so much of the time languages like Processing use built-in rules to deduce what you intend from the context and automatically perform the conversion for you: the computer can usually tell whether you need the answer as an integer or a floating point value from what you do next with the value. There are, however, plenty of other occasions when Processing will decide that it cannot be sure whether you really meant to do a conversion - or just forgot about this complication and are heading for an unexpected arithmetic error. In these circumstances Processing will declare a "type conversion error" and you will have to tell it to do the conversion explicitly. (You could use an expression like "5 + int(365.24)", or "float(5) + 365.24".)
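Written out explicitly, the two choices look like this:

println(float(5) + 365.24);   // convert the integer to a float: prints 370.24
println(5 + int(365.24));     // truncate the float to an integer: prints 370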
The moral of this tale is that you cannot quite forget that underneath your program there is a machine doing very complicated things. In the early days of computing, programmers had to remember lots of complicated rules, and so programming was a difficult craft carried out by very few, very bright people. Modern computer languages now do as much as they can to save you from having to know too much about what goes on under the bonnet - but you cannot quite escape it. When things go wrong it can be because you are expecting the computer to understand your intentions in one way, while the rules that it works by actually mean something else. The designers have done the best they can, so at that point it becomes your problem. Programming systems try as hard as they can to help you avoid making mistakes, but there is always a level at which you have to take responsibility for understanding what you are actually doing.