Why social network algorithms are so strange: a futurologist answers

Jaron Lanier, a futurologist and one of the coiners of the term “virtual reality,” discusses in his new book “Ten Arguments for Deleting Your Social Media Accounts Right Now” why the algorithms on Instagram and Facebook sometimes behave so strangely.

You may have heard the confessions of the founders of the social network empires, which I prefer to call “behavior modification empires.”

For example, Sean Parker, the first president of Facebook, said:

“We need to give you a little hit of dopamine from time to time – to show that someone has liked your photo, or your post, or whatever… A social-approval feedback loop… exactly the kind of thing a hacker like me would come up with, because we exploit vulnerabilities in human psychology… The inventors and creators – me, Mark [Zuckerberg], Kevin Systrom at Instagram – were consciously aware of this. And we did it anyway… It literally changes your relationship with each other and with society… It may even be killing the ability to work productively. God only knows what all this is doing to our children’s brains.”

<…> Let’s look at this remarkable phenomenon. Its essence is not the coordinated interplay of positive and negative feedback, but the fact that unpredictable feedback can be a more powerful motivator than perfectly consistent feedback.

If you are a child and get candy every time you say the word “please,” you may begin to say it more often. Now suppose that one day the candy doesn’t come. You might expect that from then on you would say “please” less often: why bother, if the reward isn’t guaranteed? Sounds logical, doesn’t it?

But sometimes the exact opposite happens. Your brain, accustomed from birth to subconsciously finding patterns in everything, cannot resist the challenge. “There must be some trick,” it whispers. So you keep asking politely, hoping to uncover the deeply hidden pattern. But there is no pattern – only randomness, with no false bottom underneath.

Creating a little bit of randomness on social networks is easy.

Because the algorithms are imperfect, some of the randomness is genuine. But on top of that, an element of deliberate randomness is added to news feeds, and the reason lies in basic mathematics, not in human psychology.

Social network algorithms are usually “self-learning.” That means they constantly make small changes to themselves in search of better results. “Better” here means engaging more people and therefore earning more money. Such algorithms always contain a small element of randomness.

Suppose an ordinary algorithm shows you an ad for socks or some other product about five seconds after you watch a cat video. A self-learning algorithm will test what happens if that interval is reduced to, say, four and a half seconds. Are you more inclined to buy?

If so, it will adjust the timing not only of your future ads, but also of the ads shown to thousands of other people who resemble you in anything from color preferences to driving habits.

Sometimes a change makes things worse; in that case the algorithm rolls its parameters back to their previous values.

If shifting the interval both down to four and a half seconds and up to five and a half seconds makes you less willing to buy socks, the original timing is kept. Based on the available evidence, the algorithm will treat the five-second interval as the best possible setting. In theory, if small random changes stop helping, the algorithm should stop adapting. In practice, self-learning algorithms never give up hope of improving themselves.
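To make that trial-and-rollback loop concrete, here is a toy Python sketch. Everything in it is invented for illustration: the hypothetical measure_engagement function stands in for a signal that a real system would estimate from millions of ad impressions, and it just happens to peak at five seconds.

```python
import random

def measure_engagement(delay_seconds):
    """Hypothetical stand-in for the real signal: how likely a user is to
    buy the socks when the ad appears delay_seconds after the cat video.
    Here it is simply a made-up curve that peaks at five seconds."""
    return max(0.0, 1.0 - abs(delay_seconds - 5.0) / 5.0)

delay = 5.0                              # current "best known" ad delay
best_score = measure_engagement(delay)

for _ in range(1000):
    # Try a small random tweak, e.g. 5.0 s -> 4.5 s or 5.5 s.
    candidate = delay + random.uniform(-0.5, 0.5)
    score = measure_engagement(candidate)
    if score > best_score:
        # The tweak helped: keep it (and, in a real system, roll it out
        # to thousands of users who resemble this one).
        delay, best_score = candidate, score
    # Otherwise the change is simply rolled back (discarded).

print(f"Converged delay: {delay:.2f} s")
```

With an engagement curve shaped like this one, the loop settles at the five-second interval and stays there – exactly the “best of all possible” settings described above.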

But what if only a bigger change can improve the result? Perhaps the interval should really be two and a half seconds, yet endless small corrections will never discover it, because the algorithm is stuck at five seconds. That is why self-learning algorithms also allow for occasional large random jumps. Often an algorithm reaches a new level of performance precisely when something shakes it out of its almost perfect settings.
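The sketch below illustrates the point, again with an invented engagement curve, this time with a modest peak near five seconds and a better one near two and a half. Small tweaks alone stay trapped at the five-second local optimum; an occasional large random jump lets the algorithm stumble onto the better setting.

```python
import random

def engagement(delay):
    """Invented curve with two peaks: a modest local optimum near 5 s and a
    better one near 2.5 s that small tweaks starting from 5 s never reach."""
    near_five = 0.6 * max(0.0, 1.0 - abs(delay - 5.0))
    near_two_and_a_half = 1.0 * max(0.0, 1.0 - abs(delay - 2.5))
    return max(near_five, near_two_and_a_half)

delay, best = 5.0, engagement(5.0)

for _ in range(10_000):
    if random.random() < 0.01:
        # Rare "mutation": a big random jump instead of a small correction.
        candidate = random.uniform(0.0, 10.0)
    else:
        # The usual small tweak around the current setting.
        candidate = delay + random.uniform(-0.2, 0.2)
    score = engagement(candidate)
    if score > best:
        delay, best = candidate, score

print(f"Final delay: {delay:.2f} s")  # with the jumps it usually ends near 2.5 s
```

Comment out the jump branch and the final delay stays at five seconds, no matter how many small corrections the loop makes.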

This mechanism of development by sudden leaps is common in self-learning systems. Something similar happens in biological evolution over a vast number of events. That process rests on natural selection, in which an individual’s genes either are or are not passed on to descendants, while mutation is the unpredictable factor that opens up new possibilities – the sharp leap. Each new mutation adds strange features, and some of them turn out to make the organism stronger.

When an advertising algorithm is applied to human emotions, the same randomness that helps the program adapt also fuels addiction. The algorithm tries to home in on the ideal parameters for manipulating the brain, while the brain searches for a deeper meaning and changes its reactions in response to the algorithm’s experiments. At the heart of this cat-and-mouse game is pure math. Since the algorithm’s stimuli are random, and therefore largely meaningless, the brain ends up adapting not to something real but to a fiction. That process – getting caught in the trap of a shifting mirage – is addiction. As the algorithm tries to climb out of its pit, the human mind sinks into one.

Algorithms use us to generate profits for the big companies, and that is the problem: they sell us things we do not need… and we just consume…


Written by Katty Luna
