One of the biggest differences between well-trained software engineers and self-taught programmers is the way they manage data in their programs. When working on a program of non-trivial complexity, the trained professional will spend a great deal of time working out how to organise the information it must handle; self-taught programmers tend to give most of their attention to constructing the algorithms.
We all have some intuitive understanding of algorithms: cooking is built around algorithms ("recipes"); everything we learned in school mathematics is about algorithms; even in subjects such as English and foreign languages, we were taught algorithms for spelling, plurals, tenses, word-ending agreements and so on. (Parents of primary school children in the UK will find that teachers now use the word "algorithm" quite a lot as a way of introducing computing concepts to children.) Few of us, by contrast, have any comparable sophistication in handling information, whether intuitive or taught.
Before we go any further, incidentally, we might as well clear up the distinction between the terms data and information. Data consists of the numbers and words stored in computer memory (or even on paper sheets). Information is data that has meaning associated with it. The same number (as stored in a computer file) could designate the length of a journey (in kilometres or miles?), the weight of a bag of flour, or the time (in minutes?) it takes to perform a task. The number is of little use until it is assigned an interpretation.
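To make the distinction concrete, here is a minimal sketch in Python (the Measurement class and its field names are illustrative assumptions, not part of any standard library): the bare number is data; the number together with a declared quantity and unit is information.

    from dataclasses import dataclass

    # Data: a bare number with no meaning attached.
    raw = 42.0

    # Information: the same number with an interpretation attached to it.
    @dataclass(frozen=True)
    class Measurement:
        value: float    # the stored number (the data)
        quantity: str   # what it measures, e.g. "distance", "mass", "time"
        unit: str       # how it is expressed, e.g. "km", "kg", "minutes"

    journey = Measurement(42.0, "distance", "km")
    flour = Measurement(42.0, "mass", "kg")
    task = Measurement(42.0, "time", "minutes")

    # All three hold identical data; only the interpretation differs.
    print(journey, flour, task, sep="\n")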
Computer algorithms know nothing about the meaning of the data they handle. They will perform the same calculation on our numbers whether those numbers are distances, times, or weights. It is up to the program designer to ensure that the numbers used by the program have been given a consistent interpretation. (NASA's Mars Climate Orbiter was notoriously lost in 1999 because one part of its ground software produced thruster impulse figures in imperial units while the navigation software interpreted them as metric.)
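One common defence is to settle on a single interpretation and convert every incoming value to it at the program's boundary. The sketch below, assuming a hypothetical to_metres helper and conversion table (neither is real flight software), shows the idea: all distances become metres on entry, and an unrecognised unit fails loudly instead of being silently misread.

    # A minimal sketch of "convert at the boundary": all internal
    # calculations use metres; anything else is converted, or rejected,
    # as soon as it enters the program.

    _TO_METRES = {
        "m": 1.0,
        "km": 1_000.0,
        "mile": 1_609.344,
    }

    def to_metres(value: float, unit: str) -> float:
        """Convert a distance to metres, failing loudly on unknown units."""
        try:
            return value * _TO_METRES[unit]
        except KeyError:
            raise ValueError(f"unknown distance unit: {unit!r}") from None

    # Two subsystems report the same physical distance in different units;
    # converting on entry keeps the rest of the program consistent.
    altitude_a = to_metres(57.0, "km")
    altitude_b = to_metres(35.4, "mile")   # roughly 57 km expressed in miles

    print(altitude_a, altitude_b)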
Programmers often make mistakes when constructing software. The easiest to find are those where the error lies in the specification of an algorithm, because odd behaviour caused by a mistake in one program line can usually be observed and tracked back to its source by systematic searching. An error in the interpretation of meaning tends to produce much subtler failures and can be extremely difficult to trace back to its cause, because it is not localised.