@ControlPower, @ModelThinking

The algorithm of your life

This message is about algorithms, and about how the principles behind them can help you make your actions more satisfying for everyone (including yourself).

On all sides our lives are determined and controlled by software. Without the development of ever more sophisticated programs, our modern world would collapse; our greatest fear is the power dropping out for a while. Software, in turn, is composed of algorithms: recipes, designed almost mathematically, that regulate (parts of) processes. This is not a new model that people invented. The basic principle of algorithms has always lived in our genes, and only surfaced with the formalization of digital thinking in Yin-Yang-like concepts. DNA, for example, is also a code, one that is used to shape our body. In any case, there is not much that man really “discovered” that was not already present in nature, even though the scientific formulations that follow are often inimitable.

If algorithms are a reflection of our inner processes, we can also learn something about ourselves by looking closely at how developers deal with software.

Where do you encounter software?

When people hear “software”, many think of programs on a PC or Mac, or apps on a gadget: you have to select a program before it does something for you. However, a lot of software is invisible. Think, for example, of operating systems such as Windows, Linux and Android; you hardly ever deal with these directly. Furthermore, software can be hidden because the application is self-evident, as in your television or your MP3 player.

Software is hidden in many more places. To see this, you must realize that software is structured in layers: for example, a user interface (GUI) on top, beneath it the programs and executables that do the work, and at the bottom the operating system and the hardware.

Such a structure is called an architecture, and the more important the application, the more complex the architecture! If you consider that (in this simple example) the blocks do not all have to be in the same location or on the same device, you understand how complicated the real world is. During mobile banking, for example, your gadget is the GUI. Somewhere else, servers hold the information about your bank account. And at yet another place, the executables and programs are maintained that every week announce that due to mandatory maintenance you “temporarily cannot use the service”.

As mentioned, software is in many consumer products. Think of all your gadgets and household equipment. But there is also a lot of software in cars, trains, ships and airplanes: to measure distances, to communicate, to run engines economically, to determine your position via GPS, and so on. The whole internet, with all its websites, is one big software structure. Traffic is regulated nationally via cameras, notice boards and traffic lights. Surveillance systems with image recognition know which cars drive where and when, recognize people at POS terminals, and through analysis of behavioral patterns allow timely intervention when quarrels arise in entertainment venues. The whole financial sector depends on software, and on the stock exchange the automated “traders” are the big earners for their owners. Industry cannot do without software for logistics, stock management, distribution centers and process monitoring. In sports the video referee has been introduced, and the army is blind without advanced software, so it is clear that you can hardly think of anything in which no piece of software lurks. Even when you listen to an orchestra in a concert hall, software is behind the optimal acoustic experience, if only in the audio systems that apply sound effects.

And all this software uses algorithms.

If all software uses algorithms, and we want to learn from them how to increase our self-reliance, then we have to dive a bit deeper into the matter.

An example of algorithms.

Lately everyone hears and reads about problems with Google’s search algorithms, or with the algorithms behind Facebook, Twitter, LinkedIn and so on.

  • At Google, search results are affected by commercial interests. Companies that are more loyal to Google (ads, sponsoring, etc.) are ranked higher in the search results. The same applies to comparison sites, for example for hotels or mobile phone subscriptions, where the site gets a bonus when users complete a booking. In addition to this positive discrimination, there are algorithms that exclude certain content, such as explicit sexual expressions, bad language, and calls for violence and terrorism.
  • What you see in your “feed” on social media (the list that is presented to you as soon as you log on) is determined by your interests, your contacts, how many and which “likes” you receive and give, which messages you post yourself, whether you respond to messages from others, and so on.
  • The unsolicited ads that you see on your screen are the result of algorithms that make a split-second decision about who gets to offer something and what best fits your search history. If, for example, you have just looked at what kinds of suitcases are available, you can be sure that the next week you will be bombarded with similar offers.

These algorithms are maintained by developers. Under the guise of a better user experience, all sorts of commercial goals are pursued. That these two are linked is logical: a poor user experience (seeing things you are absolutely not interested in) casts a shadow over the internet company and over the advertising organizations that provide the revenue. There is, however, a very big disadvantage to this mechanism. A profile of each user is kept in a database, and a number of almost invisible marketing enterprises become filthy rich by collecting and trading profile data. You recognize them, for example, by the unsolicited mailings they send from completely incomprehensible sender addresses (unfortunately, to discover that you have to look at the source text of your mail). Do not think too lightly of such a profile: it can be a dynamic database, depending on the time of year and actual holidays, and tailored to average behavior in certain age groups or social contexts.

By clicking on something you express a preference or interest. This gives an element of your profile more weight. The next time, your search results are more in line with that element, because the system assumes you find it more interesting. The chance increases that you click on it again, and with that the self-confirming preference cycle is closed. In the end, the free choice that you would like to have gradually disappears; the “out-of-the-box” idea you are looking for fades away. It is just like your brain: what you give a lot of attention develops more strongly and becomes more dominant. To do something about this you have to “fool” the algorithms, for example by searching for the exact opposite, or by regularly entering all sorts of nonsensical searches, so that the systems are polluted by noise. Somewhat like “FlipThinking” is meant to free your brain from its preferred reasoning patterns.
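The self-confirming preference cycle can be sketched as a tiny weight-update loop. This is a minimal illustration in Python, not any real company's algorithm; the categories and the boost factor are made up:

```python
# Illustrative sketch of a self-reinforcing preference profile.
# The categories and the boost factor are hypothetical.

profile = {"suitcases": 1.0, "cooking": 1.0, "sports": 1.0}

def record_click(category, boost=1.5):
    """Each click gives that interest more weight in the profile."""
    profile[category] *= boost

def ranked_results():
    """Results are ordered by weight, so heavy interests surface first."""
    return sorted(profile, key=profile.get, reverse=True)

# One click on suitcases ...
record_click("suitcases")

# ... and suitcases now lead every result list, which makes the next
# click on them even more likely: the cycle is closed.
print(ranked_results())  # ['suitcases', 'cooking', 'sports']
```

Fooling the algorithm, as suggested above, would amount to calling `record_click` on categories you are not actually interested in, flattening the weights with noise.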


4 Lessons from algorithms.

An algorithm defines how and under which circumstances (together: the conditions) a certain act (or action) takes place. Just as in a cooking recipe: “Once the water boils (= condition), add (= action) a pinch (= condition) of salt (= condition).” Note that the amount of salt is also a precondition for the operation: you should not add too much. The recipe is actually: “add something (= variable) to something else (= variable) when that reaches a certain state (= variable)”. Such a recipe can equally be applied to, for example, sugar in tea, meat in the casserole, or gasoline in your car. The variables are simply filled in differently, depending on the need during execution of the algorithm. This is typical of an algorithm: a good developer makes it universally usable by working with variables, while a less experienced developer pollutes and limits an algorithm with pre-filled information. With people this is no different: a free thinker has learned to disconnect from prejudices and other unconscious data, and can thus arrive at sensational insights (lesson one).
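The cooking recipe above can be sketched as a parameterized function. This is a hypothetical illustration, not real recipe software; note how the universal version takes variables, while the “polluted” version has everything pre-filled:

```python
# Universal recipe: everything specific is passed in as a variable.
def add_when_ready(ingredient, amount, target, is_ready):
    """Add `amount` of `ingredient` to `target` once the condition holds."""
    if is_ready(target):
        target.append((ingredient, amount))
        return True
    return False

# The same algorithm serves salt in water or sugar in tea;
# only the variables are filled in differently.
pan = ["boiling water"]
add_when_ready("salt", "a pinch", pan, is_ready=lambda p: "boiling water" in p)

cup = ["tea"]
add_when_ready("sugar", "one lump", cup, is_ready=lambda c: "tea" in c)

# The "polluted" version: pre-filled information makes it single-purpose.
def add_salt_to_boiling_water(pan):
    pan.append(("salt", "a pinch"))  # works for exactly one dish
```

The universal version is the free thinker of the two: it carries no baked-in assumptions about what is being added to what.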

Usually a certain sequence of actions is needed to achieve a result. Because the world is more complicated than that, different actions also take place simultaneously, and they can be mutually dependent. In our heads we know these kinds of thinking patterns too:

  • first this, then that
  • if this, then that; unless other condition, rather do something else
  • while that action, do this action, and otherwise do something else
  • depending on this, do that
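These thinking patterns map directly onto the control structures found in most programming languages. A minimal sketch in Python, with made-up conditions and actions:

```python
log = []

def act(name):
    """Stand-in for a real action; we just record that it happened."""
    log.append(name)

water_boils = True
in_a_hurry = False
pasta = ["raw", "raw", "raw"]

# first this, then that
act("fill pan")
act("turn on stove")

# if this, then that; unless another condition, rather do something else
if water_boils and not in_a_hurry:
    act("add salt")
else:
    act("order takeaway")

# while that action, do this action
while pasta:
    pasta.pop()
    act("stir")

# depending on this, do that
act("serve" if water_boils else "wait")
```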

Common spoken language has a lot of these kinds of nuances to name actions and preconditions. In software algorithms this is simplified in a creative way, but that is irrelevant here (even though software resembles ordinary human language more and more). The second lesson we learn here is that by exploring our own language use we can expose our thinking patterns. You then discover what you find important, and you recognize what you take for granted as a condition or action. Feel free to ask questions about that. Are there no other possible actions, alternatives or preconditions that also play a role? With connecting words like those above you spontaneously make connections, but are they good for you right now?

A huge number of algorithms are active in the world around us. Trust me, the developers really have not all agreed in advance how everything should work together. In some cases standards have been defined: law books, so to speak, that describe what the outcome of an algorithm must comply with. Consider, for example, a WiFi standard, or the MP3 and MP4 encodings of music and film. But with the growth of applications and the global scale on which they operate, those standards are not always sufficient to prevent software conflicts. Often these are matters of interpretation (a file does play on an Android device, but not on an Apple one), or attempts to stretch the boundaries of a standard (more data, higher speeds). Of course the standard should not limit the possibilities desired by end users. Regulation must guarantee the mutual coherence between applications; if that no longer works, the standard is ready for a revision.

In norms, values and convictions we also carry a number of standards in our heads. The third lesson tells us that we should not be too rigid about these. It is always necessary to check whether our standards are in line with those of others. We must also be critical when we do not achieve the desired results in our lives, or when our standards limit us too much. It is good to realize that many of these norms, values and beliefs were trained into us during our upbringing and school time. In an increasingly fast-changing world, some of those “standards” may well require a revision. You determine your rules of life, and not the other way around! Of course you are not alone in the world, and so you have a lot to discuss with your environment. But that is the beauty of life: discover and develop together! And if you do not come to an agreement, well, then you simply change your variables. In other words: choose a different environment, a different partner, a different sports club, other work, and so on.

With ever-advancing technology one succeeds in making very powerful processors. Lower dissipation, higher transmission speeds, higher data densities of storage media and more powerful communication channels result in gigantic data centers, and in the near future quantum computers. You hear more and more about the rise of artificial intelligence (AI), in which systems start to think for themselves. Deeply hidden in them are algorithms that do not always perform the same static trick, but that can dynamically rewrite themselves based on information from their environment, or on conclusions they have drawn previously. Such systems are so complex that no one can accurately oversee or predict their entire operation. The ethical-philosophical question of who then defines the norms, values and convictions of such an AI, and to what extent these are really guaranteed, goes too far here. Visionary writers such as Isaac Asimov elaborated models for this long ago, which later turned out not to be sustainable.
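A heavily simplified illustration of such self-rewriting behavior, nowhere near real AI: an algorithm that adjusts its own step-size parameter based on the conclusions it drew in earlier rounds. Everything here is hypothetical:

```python
# Toy example of an algorithm that rewrites one of its own parameters.
# It hunts for a target value and halves its step after every move.
def adaptive_guess(target, guess=0.0, step=16.0):
    """Return the found value and the history of guesses along the way.

    Works when the target is reachable by halving steps,
    e.g. 10.0 starting from 0.0 with step 16.0.
    """
    history = []
    while guess != target:
        history.append(guess)
        if guess < target:
            guess += step
        else:
            guess -= step
        step /= 2  # the algorithm adjusts its own behavior
    return guess, history

value, history = adaptive_guess(10.0)
print(value, history)  # 10.0 [0.0, 16.0, 8.0, 12.0]
```

The point is not the arithmetic but the pattern: the rule that decides the next action is itself modified by the outcomes of earlier actions.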

This progress towards supercomputers gives many developers an answer to the problem of software conflicts: let the systems figure out for themselves what works best. Algorithms may adapt themselves if that is better for the whole; how exactly that works is again a bridge too far here. But the final lesson we can draw from this argument is that it is okay to rewrite our own approach. If we follow certain methods, if we always use certain thought patterns, and we notice that we are not pleased with the outcome, then it is entirely right to change our attitude. That is not so easy, however (which, by the way, applies to all 4 lessons here). You may well need help from good friends, a coach or even a therapist. As long as you remember: it is OK to want to change something in your life!

So, summarizing the 4 lessons mentioned:

  • Free yourself from prejudices and opinions.
  • Evaluate your use of language to understand your behavioral and thought patterns.
  • Determine your own norms and values in life.
  • Revise your approach if that is better for you.

Finally something funny: an example of an alternative calculus algorithm…
