..............
And that girl, look,
She gazes at me with her soul...
No, dear, don't trouble yourself to love me.
I'll have a black coffee, though,
From your hand.
I like that you know how to make it
Bitter.
(Marin Sorescu)
..............

Saturday, January 30, 2021

letter to razvan - batman begins

 

.

in short, life as it was shown to me (let's say that's the subtitle)

explanatory note: woke up in the middle of the night and it's hard to fall back asleep. partly because the darkness of night makes thoughts darker and deeper too. so at least I'm writing them down. after almost a whole lifetime, some connections with classmates from primary school happened to be rekindled. R. is one of them, the one who prompted the lines below.

R., you just told me you didn't have a pedigree family to provide you with... what was needed.

a theory of mine about society holds that the level at which a family secures its "offspring's" entry into it is decisive.

in short, I had a childhood in which I was beaten conscientiously about once or twice a week, and hit and insulted daily. by the adults who were supposed to look after my development.

those adults decided that I, who went to math olympiads and national chess competitions, insistently proposed for international ones (even under the conditions above), had to be sent to a vocational high school for auto mechanics.

a family friend with influence confessed to me (when I was about 30) that he had been horrified by this, so he convinced them to send me instead to a high school for waiters and cooks, promising he would look after me and get me in somewhere good after graduation.

at 14 I ran away from home to escape the terror I was "living" in. I then lived with other adults who didn't do what the first ones had done. but they offered me an adolescence of cold, hunger, and poverty.

the kind of poverty that teaches you how to combine water, bread, margarine, sugar and a hot plate so the result tastes good. depending on a few parameters: whether the bread is fresh or rock-hard, and whether sugar and margarine happen to be available. for one thing, back then bread went stale instead of moldy like today; for another, we kept it in the open air, not in bags, precisely to keep it from molding, so it stayed edible after all...

my first university entrance exam, you could say, was (partly) my own fault. I was head over heels in love, so after doing some fabulous things on the math exam (unfortunately after a subtraction error, 3-1=1 instead of 2), I rushed out to wait for her, to be the first to see her when she came out of her exam. (a rather juicy little story, with many interesting details, I won't go into it now)

well, for about two months now I've had a little theory - because I just saw a neurology article claiming that memory development is strongly affected by the level of anxiety in early childhood. which would explain why it was hard to impossible for me to memorize, to cram. ergo, hard to pass university entrance exams where I couldn't rely exclusively on (mathematical) intelligence.

then, from 18 on, I had to work. not part-time, not an internship... but work that covered 100% of my food, clothing and... a roof! and, if possible - money for books. even going out for a soda with friends, with girls... was a serious threat to my finances. at 18, 20, 25...

all this was happening in those '90s-2000s years, when grown adults were being driven to ruin... and life in Romania was brutal. unemployment and inflation were wreaking havoc.

ah, and something essential - I had to work jobs found in the newspaper. that's the catch. those years, on the one hand; on the other, these being the first jobs of your life, the ones that give you your start...! because your CV starts to limit you based on them. I have great difficulty believing your family, even without pedigree, didn't help you. that there wasn't a single family acquaintance who said he knew of an open position or something, even with a competitive hiring process. a starter job, a beginning, somewhere to grow from. a place to start, though, not to get stuck in. even within the same company, the difference between those hired "from the newspaper" and those who came recommended was enormous. beyond the colossal differences in pay, advancement opportunities simply didn't exist for those taken from the newspaper. paradoxically, if they did their job exemplarily, they were kept in the same position precisely because they filled it so efficiently. this still happens in corporations today; you see it everywhere. maybe you'll tell me that under your management things don't work that way. even so - yours is an oasis. the rest of the world doesn't run by "your" rules - and here there's a mandatory connection to the subject of the Universe, but that subject some other time.

and so I developed two parallel lives.

one in which I was condemned to jobs at the bottom of the pyramid, badly paid and somewhat insulting to my abilities and (later) my training. cook, house painter, construction worker, warehouse keeper, basic bookkeeping... anything, absolutely anything I could find in the newspaper. nota bene, I couldn't afford to be selective; the imminence of hunger and the odd need not to live on the street didn't allow it.

and the second existence - my personal life - where I threw myself into culture. and into the search for the beautiful values missing from the "other" life. the second existence where I was also learning the part about "the Universe answering you", eager for that to be true, eager for that... relationship to appear in my life too. though for as long as I could remember I had acted a priori by the rules of the good and the beautiful anyway - because that's how I felt, not because I was seeking the Universe's reciprocity. (it was a beautiful chapter of my life - maybe the most beautiful; I'd love for us to get into it sometime, but it's a delicate one)

the second existence where everything was fine and lovely (beyond the frustration of not being able to afford enough books), except that after my interlocutors were fascinated by me, the questions came... "and what do you do?", "what did you study?". in my early 20's I couldn't afford a second try at university and I was a cook. or a house painter, or who knows what else. and people were amazed all over again, after which... they'd look for someone more important to talk to, because they too were racing toward the top of the food chain. back then it was downright embarrassing to talk to a cook; now it's somehow fashionable.

and the second problem I ran into is very succinctly put by monsieur Liiceanu, who has stated repeatedly that the intellectual is not the one who takes refuge in culture, the one "who uses culture as a drug, as a need" (as I did), but evidently the one who happens to have a family that allows him to lounge on the benches of a faculty promising an uncertain financial future! eh well, for me culture was primarily a need, a way to survive inwardly under the conditions of the life described above. so my "social recognition" was blocked in that direction too.
you met me once at a discussion group, with university professors, researchers, students, docents... naturally, I amazed them too a few times (I only opened my mouth there a few times) and slightly revolutionized the way their meetings ran. (that's a little story in itself, a fairly sweet one) but I was still an outsider; I didn't have society's official stamp of approval.

and to all of the above add the fact that on the job market I was competing with people who could afford to ask for three quarters of the pay, or half, or even do an internship, for positions from which I needed to cover my living from a to z.

and all that without even mentioning the effect of culture and intelligence in the environments I was forced to live in. because the two were obviously incompatible. and I had the bad (unconscious) habit of using too rich a vocabulary and, worse - no grammatical mistakes... I instantly became insufferable. while I quickly learned to rein in the vocabulary when needed, I couldn't bring myself to speak ungrammatically. that's how it is on a construction site. once they got to know me - years later - almost all of them confessed with remorse that they had treated me badly, that I was actually an ok guy but they couldn't stand me at first for the reasons above, that they were convinced I was "showing off". because I spoke correctly...!
bogdan can tell you - a childhood friend told me he hadn't offered me an office job, even though I had shown him that I had taught myself photoshop and corel in record time, to an impressive level - because he didn't believe I was doing that badly, and he didn't believe it for the simple reason that everyone knew(!) that "Liviu is smart, he manages, he doesn't need our help".

as for the effect of intelligence... well, I very quickly came to excel at any new trade. which instantly made me "threatening" to the veterans. I don't think I need to explain further what happened next.

analyze society to figure out how to reach the top of the food chain?! man, the very possibility of doing that was a gift from your "pedigree-less" family, a gift few receive.

the chance to set out toward the top of the food chain was only offered to me at around 38-40. some 20 years after the rest of you set out. under these conditions, do you think there was still a real possibility?! especially since, with a life like mine, your batteries drain much faster. batteries that never even got properly formatted in childhood.

from another perspective, after such a life (plus my father's - a true unknown genius; I learned late who he had really been in Romanian chess), one of the strongest notions someone can be left with is "the uselessness of quality". where you start from matters far more. but I'm convinced such statements would antagonize you too much, so I won't put them here.

.

 

Friday, January 29, 2021

A photo

a short testimony about what it means to work in IT and to want, at home(!), to show someone a photo.

one... photo... just one!

after searching for the photo on fb you quickly say fukit and start looking for it on the computer.
of course it's not on the computer but on a storage device (a NAS).
a storage box that errors out when accessed (a drive mapped over the network). errors out on the admin web page too. errors out even on a direct network-share access attempt.
and that's where the dance begins. because you quickly remember you just swapped the disks (the hard drives) in it. and rebuilt the raid (a complication meant to protect you from losing the photos). and it worked... but who knows what happened to the disks, and maybe the raid went splat. so you look around for the old disks and wonder, if you remounted them, would they still be ok... and the room starts to spin as you weigh the possible scenarios.
then you remember you got curious and attached a wireless adapter to the NAS. and were happy when you managed to set it up.
and then you grabbed, in a hurry, the NAS's network cable to plug into a newly bought device...
which means there's a chance everything works, but the storage now has a different IP. you jump into the router and try to allocate a lease for the new IP. sorry, I skipped the part where it takes you about 5 minutes to figure out which of the clients listed on the router is the storage, for the simple reason that the wireless adapter had been used on an old computer and still carried that PC's name. you even wondered how the old computer could still show up in the list, but decided to sort out that oddity some other time.
back on track: obviously you can't allocate the wireless adapter a lease on the desired IP, because it's already allocated to another one (the cable connection, inactive right now, and you don't want to break that setting anyway).
so on all fours you feel around for wires, move one cable back and... it doesn't work.
it takes you another 5 minutes or so to convince the storage to talk over the cable instead of wireless.
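that five-minute hunt for the NAS's new address can be scripted. a minimal sketch (Python, since the NAS model is unknown), assuming the LAN is a 192.168.1.0/24 and that the admin page answers on TCP port 80 - both are assumptions, adjust for your own router:

```python
import ipaddress
import socket


def lan_hosts(cidr: str) -> list[str]:
    """Every usable host address in the subnet (e.g. a home /24)."""
    return [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]


def answers_on_port(host: str, port: int, timeout: float = 0.2) -> bool:
    """True if the host accepts a TCP connection on the given port.
    Port 80 is assumed here to be the NAS admin page."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# sweep the subnet for anything serving the admin page:
# candidates = [h for h in lan_hosts("192.168.1.0/24")
#               if answers_on_port(h, 80)]
```

faster than squinting at the router's client list, where the adapter still wears an old computer's name.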

done!

you can send the photo you were looking for!

.

Re-Discovery

once again, but from a different direction, I discover that human misery doesn't bother you if you haven't suffered directly because of it.

which is why those with the power to change the system don't. for them human misery is movie material, the way lives on yachts docked at private islands are for us.

so they busy themselves with preserving a certain status quo... while wondering whether they couldn't perhaps... one more island.

Thursday, January 28, 2021

Ads, profit and the totalitarian state


It starts out benign-boring. I almost turned it off.

But from minute 8 things change.

It shows how, just to serve you some ads that may or (more often) may not work, the platforms radicalize behaviors - a side effect they aren't interested in but cannot avoid. And we're talking about radicalizing not only benign behaviors, but above all those of people with psychological problems, or behaviors that are illegal, toxic for society.

It shows how easily the platforms can decide election outcomes, and how they offer this influence as a product, officially.

And at the end - as the cherry on a funeral cake - it shows how internet platforms are building the infrastructure for installing the perfect totalitarian state: the kind you don't even know exists, and which, if you do suspect it, you haven't the slightest chance of proving.

And all of this as a side effect(!!), while they're merely trying to show you some relatively benign ads.

All over the net you find guys who think they're clever with the "argument": "if you really believe your thoughts are worth anything...". It's precisely for those whose thoughts aren't worth anything that the surveillance problem is most acute, because in their case the influencing is more effective!

 

https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads?utm_content=2021-1-27&utm_campaign=social&utm_source=facebook.com&utm_medium=social 

and a transcript, in case someone decides to pull the clip off the net:
00:04
So when people voice fears of artificial intelligence, very often, they invoke images of humanoid robots run amok. You know? Terminator? You know, that might be something to consider, but that's a distant threat. Or, we fret about digital surveillance with metaphors from the past. "1984," George Orwell's "1984," it's hitting the bestseller lists again. It's a great book, but it's not the correct dystopia for the 21st century. What we need to fear most is not what artificial intelligence will do to us on its own, but how the people in power will use artificial intelligence to control us and to manipulate us in novel, sometimes hidden, subtle and unexpected ways. Much of the technology that threatens our freedom and our dignity in the near-term future is being developed by companies in the business of capturing and selling our data and our attention to advertisers and others: Facebook, Google, Amazon, Alibaba, Tencent.

01:17
Now, artificial intelligence has started bolstering their business as well. And it may seem like artificial intelligence is just the next thing after online ads. It's not. It's a jump in category. It's a whole different world, and it has great potential. It could accelerate our understanding of many areas of study and research. But to paraphrase a famous Hollywood philosopher, "With prodigious potential comes prodigious risk."

01:52
Now let's look at a basic fact of our digital lives, online ads. Right? We kind of dismiss them. They seem crude, ineffective. We've all had the experience of being followed on the web by an ad based on something we searched or read. You know, you look up a pair of boots and for a week, those boots are following you around everywhere you go. Even after you succumb and buy them, they're still following you around. We're kind of inured to that kind of basic, cheap manipulation. We roll our eyes and we think, "You know what? These things don't work." Except, online, the digital technologies are not just ads. Now, to understand that, let's think of a physical world example. You know how, at the checkout counters at supermarkets, near the cashier, there's candy and gum at the eye level of kids? That's designed to make them whine at their parents just as the parents are about to sort of check out. Now, that's a persuasion architecture. It's not nice, but it kind of works. That's why you see it in every supermarket. Now, in the physical world, such persuasion architectures are kind of limited, because you can only put so many things by the cashier. Right? And the candy and gum, it's the same for everyone, even though it mostly works only for people who have whiny little humans beside them. In the physical world, we live with those limitations.

03:25
In the digital world, though, persuasion architectures can be built at the scale of billions and they can target, infer, understand and be deployed at individuals one by one by figuring out your weaknesses, and they can be sent to everyone's phone private screen, so it's not visible to us. And that's different. And that's just one of the basic things that artificial intelligence can do.

03:56
Now, let's take an example. Let's say you want to sell plane tickets to Vegas. Right? So in the old world, you could think of some demographics to target based on experience and what you can guess. You might try to advertise to, oh, men between the ages of 25 and 35, or people who have a high limit on their credit card, or retired couples. Right? That's what you would do in the past.

04:19
With big data and machine learning, that's not how it works anymore. So to imagine that, think of all the data that Facebook has on you: every status update you ever typed, every Messenger conversation, every place you logged in from, all your photographs that you uploaded there. If you start typing something and change your mind and delete it, Facebook keeps those and analyzes them, too. Increasingly, it tries to match you with your offline data. It also purchases a lot of data from data brokers. It could be everything from your financial records to a good chunk of your browsing history. Right? In the US, such data is routinely collected, collated and sold. In Europe, they have tougher rules.

05:15
So what happens then is, by churning through all that data, these machine-learning algorithms -- that's why they're called learning algorithms -- they learn to understand the characteristics of people who purchased tickets to Vegas before. When they learn this from existing data, they also learn how to apply this to new people. So if they're presented with a new person, they can classify whether that person is likely to buy a ticket to Vegas or not. Fine. You're thinking, an offer to buy tickets to Vegas. I can ignore that. But the problem isn't that. The problem is, we no longer really understand how these complex algorithms work. We don't understand how they're doing this categorization. It's giant matrices, thousands of rows and columns, maybe millions of rows and columns, and not the programmers and not anybody who looks at it, even if you have all the data, understands anymore how exactly it's operating any more than you'd know what I was thinking right now if you were shown a cross section of my brain. It's like we're not programming anymore, we're growing intelligence that we don't truly understand.

06:43
And these things only work if there's an enormous amount of data, so they also encourage deep surveillance on all of us so that the machine learning algorithms can work. That's why Facebook wants to collect all the data it can about you. The algorithms work better.

07:00
So let's push that Vegas example a bit. What if the system that we do not understand was picking up that it's easier to sell Vegas tickets to people who are bipolar and about to enter the manic phase. Such people tend to become overspenders, compulsive gamblers. They could do this, and you'd have no clue that's what they were picking up on. I gave this example to a bunch of computer scientists once and afterwards, one of them came up to me. He was troubled and he said, "That's why I couldn't publish it." I was like, "Couldn't publish what?" He had tried to see whether you can indeed figure out the onset of mania from social media posts before clinical symptoms, and it had worked, and it had worked very well, and he had no idea how it worked or what it was picking up on.

07:58
Now, the problem isn't solved if he doesn't publish it, because there are already companies that are developing this kind of technology, and a lot of the stuff is just off the shelf. This is not very difficult anymore.

08:13
Do you ever go on YouTube meaning to watch one video and an hour later you've watched 27? You know how YouTube has this column on the right that says, "Up next" and it autoplays something? It's an algorithm picking what it thinks that you might be interested in and maybe not find on your own. It's not a human editor. It's what algorithms do. It picks up on what you have watched and what people like you have watched, and infers that that must be what you're interested in, what you want more of, and just shows you more. It sounds like a benign and useful feature, except when it isn't.

08:52
So in 2016, I attended rallies of then-candidate Donald Trump to study as a scholar the movement supporting him. I study social movements, so I was studying it, too. And then I wanted to write something about one of his rallies, so I watched it a few times on YouTube. YouTube started recommending to me and autoplaying to me white supremacist videos in increasing order of extremism. If I watched one, it served up one even more extreme and autoplayed that one, too. If you watch Hillary Clinton or Bernie Sanders content, YouTube recommends and autoplays conspiracy left, and it goes downhill from there.

09:43
Well, you might be thinking, this is politics, but it's not. This isn't about politics. This is just the algorithm figuring out human behavior. I once watched a video about vegetarianism on YouTube and YouTube recommended and autoplayed a video about being vegan. It's like you're never hardcore enough for YouTube.

10:04
(Laughter)

10:05
So what's going on? Now, YouTube's algorithm is proprietary, but here's what I think is going on. The algorithm has figured out that if you can entice people into thinking that you can show them something more hardcore, they're more likely to stay on the site watching video after video going down that rabbit hole while Google serves them ads. Now, with nobody minding the ethics of the store, these sites can profile people who are Jew haters, who think that Jews are parasites and who have such explicit anti-Semitic content, and let you target them with ads. They can also mobilize algorithms to find for you look-alike audiences, people who do not have such explicit anti-Semitic content on their profile but who the algorithm detects may be susceptible to such messages, and lets you target them with ads, too. Now, this may sound like an implausible example, but this is real. ProPublica investigated this and found that you can indeed do this on Facebook, and Facebook helpfully offered up suggestions on how to broaden that audience. BuzzFeed tried it for Google, and very quickly they found, yep, you can do it on Google, too. And it wasn't even expensive. The ProPublica reporter spent about 30 dollars to target this category.

11:53
So last year, Donald Trump's social media manager disclosed that they were using Facebook dark posts to demobilize people, not to persuade them, but to convince them not to vote at all. And to do that, they targeted specifically, for example, African-American men in key cities like Philadelphia, and I'm going to read exactly what he said. I'm quoting.

12:21
They were using "nonpublic posts whose viewership the campaign controls so that only the people we want to see it see it. We modeled this. It will dramatically affect her ability to turn these people out."

12:37
What's in those dark posts? We have no idea. Facebook won't tell us.

12:43
So Facebook also algorithmically arranges the posts that your friends put on Facebook, or the pages you follow. It doesn't show you everything chronologically. It puts the order in the way that the algorithm thinks will entice you to stay on the site longer.

13:02
Now, so this has a lot of consequences. You may be thinking somebody is snubbing you on Facebook. The algorithm may never be showing your post to them. The algorithm is prioritizing some of them and burying the others.

13:20
Experiments show that what the algorithm picks to show you can affect your emotions. But that's not all. It also affects political behavior. So in 2010, in the midterm elections, Facebook did an experiment on 61 million people in the US that was disclosed after the fact. So some people were shown, "Today is election day," the simpler one, and some people were shown the one with that tiny tweak with those little thumbnails of your friends who clicked on "I voted." This simple tweak. OK? So the pictures were the only change, and that post shown just once turned out an additional 340,000 voters in that election, according to this research as confirmed by the voter rolls. A fluke? No. Because in 2012, they repeated the same experiment. And that time, that civic message shown just once turned out an additional 270,000 voters. For reference, the 2016 US presidential election was decided by about 100,000 votes. Now, Facebook can also very easily infer what your politics are, even if you've never disclosed them on the site. Right? These algorithms can do that quite easily. What if a platform with that kind of power decides to turn out supporters of one candidate over the other? How would we even know about it?

15:16
Now, we started from someplace seemingly innocuous -- online ads following us around -- and we've landed someplace else. As a public and as citizens, we no longer know if we're seeing the same information or what anybody else is seeing, and without a common basis of information, little by little, public debate is becoming impossible, and we're just at the beginning stages of this. These algorithms can quite easily infer things like people's ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age and gender, just from Facebook likes. These algorithms can identify protesters even if their faces are partially concealed. These algorithms may be able to detect people's sexual orientation just from their dating profile pictures.

16:24
Now, these are probabilistic guesses, so they're not going to be 100 percent right, but I don't see the powerful resisting the temptation to use these technologies just because there are some false positives, which will of course create a whole other layer of problems. Imagine what a state can do with the immense amount of data it has on its citizens. China is already using face detection technology to identify and arrest people. And here's the tragedy: we're building this infrastructure of surveillance authoritarianism merely to get people to click on ads. And this won't be Orwell's authoritarianism. This isn't "1984." Now, if authoritarianism is using overt fear to terrorize us, we'll all be scared, but we'll know it, we'll hate it and we'll resist it. But if the people in power are using these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels, to deploy persuasion architectures at scale and to manipulate individuals one by one using their personal, individual weaknesses and vulnerabilities, and if they're doing it at scale through our private screens so that we don't even know what our fellow citizens and neighbors are seeing, that authoritarianism will envelop us like a spider's web and we may not even know we're in it.

18:13
So Facebook's market capitalization is approaching half a trillion dollars. It's because it works great as a persuasion architecture. But the structure of that architecture is the same whether you're selling shoes or whether you're selling politics. The algorithms do not know the difference. The same algorithms set loose upon us to make us more pliable for ads are also organizing our political, personal and social information flows, and that's what's got to change.

18:53
Now, don't get me wrong, we use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I've written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it's not that the people who run, you know, Facebook or Google are maliciously and deliberately trying to make the country or the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it's not the intent or the statements people in technology make that matter, it's the structures and business models they're building. And that's the core of the problem. Either Facebook is a giant con of half a trillion dollars and ads don't work on the site, it doesn't work as a persuasion architecture, or its power of influence is of great concern. It's either one or the other. It's similar for Google, too.

20:16
So what can we do? This needs to change. Now, I can't offer a simple recipe, because we need to restructure the whole way our digital technology operates. Everything from the way technology is developed to the way the incentives, economic and otherwise, are built into the system. We have to face and try to deal with the lack of transparency created by the proprietary algorithms, the structural challenge of machine learning's opacity, all this indiscriminate data that's being collected about us. We have a big task in front of us. We have to mobilize our technology, our creativity and yes, our politics so that we can build artificial intelligence that supports us in our human goals but that is also constrained by our human values. And I understand this won't be easy. We might not even easily agree on what those terms mean. But if we take seriously how these systems that we depend on for so much operate, I don't see how we can postpone this conversation anymore. These structures are organizing how we function and they're controlling what we can and we cannot do. And many of these ad-financed platforms, they boast that they're free. In this context, it means that we are the product that's being sold. We need a digital economy where our data and our attention is not for sale to the highest-bidding authoritarian or demagogue.

22:14
(Applause)

22:21
So to go back to that Hollywood paraphrase, we do want the prodigious potential of artificial intelligence and digital technology to blossom, but for that, we must face this prodigious menace, open-eyed and now.

22:39
Thank you.