
Trump Is Forever


Either a year from now or five years from now, Donald Trump will step away from the presidency. Raise your hand if you think he will retire to Mar-a-Lago and delete his Twitter account.

It seems much more likely—maybe inevitable—that once he leaves office, Trump will continue to tweet and call in to cable news shows. Perhaps he will even attend political rallies, which is the part of the job he seems to enjoy most.

There is no reason to think—none at all—that he will discontinue his penchant for weighing in on American politics on an hourly basis. There is every reason to think that he will vigorously attack any Republican who was disloyal to him during his administration. Or retroactively criticizes his tenure. Or runs in opposition to one of his preferred candidates. Or jeopardizes any of his many and varied interests.

What this means is that there is no way for a Trump-skeptical Republican to simply wait out the Trump years. There will be no “life after Trump” because Trump is going to be the head boss of Republican politics for the rest of his days.

As I said at the beginning: Trump is not a caretaker of the Republican party. He is the owner.

duerig, 40 days ago:
That is the current calculus of many in the party leadership right now, but it is incorrect. He is an old man. And 'elder statesmen' who retain influence long after their political careers end do so because they gained respect during those years and husbanded it afterwards by not shouting at clouds. The only respect that Trump has right now is the assumed respect of his office. It is clear even from afar that nobody around him respects him: not his staff, not his fellow Republican leaders, and not the mainstream press. He wears the respect of his office like a stolen suit, but the urine smell bites through the assumed respectability. That respect will evaporate as soon as he leaves office. And we can only hope that the day of his exit will be soon in coming.

'Civilization' and Strategy Games' Progress Delusion


So here’s a question: what author springs to mind when you read the word “evolution”?

Chances are, you thought of Charles Darwin. Problem is, the word evolution was never used in the first edition of On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. Darwin only began employing the term in the sixth edition, published thirteen years after the original 1859 edition, precisely because it was already in common use. And the person most responsible for the popularity of the term was a British philosopher named Herbert Spencer.

Spencer believed that progress was a cosmic phenomenon where all things advanced from simplicity to complexity. From geology to society, the entire universe was on a single trajectory of ever greater differentiation. And so evolution emerged as two different concepts under the same name: for Darwin it was adaptation, how species changed to suit their environment through the process of natural selection. For Spencer it was progress.

“Evolution as progress” became the bedrock of early social anthropology. The eighteenth century “savage” became the nineteenth century “primitive”, no longer something altogether different but instead just backward. “We” (whoever that is) were once like them, but “we” had evolved, whereas “they” had not. Or to put it another way, social evolutionism transformed a spatial difference (people who live in different parts of the globe do things differently) into a temporal difference (“they” do as “we” once did, but “we” have progressed and they have not).

'Crusader Kings 2' screenshot courtesy of Paradox Interactive

But let’s talk about videogames.

Now you don’t need me to tell you that the 4X genre is problematic (the four Xs stand for explore, expand, exploit, exterminate, after all). And I’d hazard a guess that most 4X developers take a systemic approach to game design which treats theme as a largely secondary issue (Sid Meier has repeated Bruce Shelley’s joke that they do their research in the kids’ section of the library [48 minutes into the linked recording]). But games are artifacts produced within a given social context and as such reproduce aspects of its worldview, particularly those aspects that are seen as being natural.

And what do we find in most historical 4X games? A largely uniform tech tree that all factions progress through in a single direction. Even non-historical 4X games feature uniform tech trees; they just use the present as a starting point and not an endpoint. But what is progress in an historical 4X game? To be blunt, it’s the elimination of difference. The closer you are to “us”, the more you have progressed. All Civ games begin with a settler unit, and your first choice is where to settle, to become sedentary. Once the first city is built, you start transforming the surrounding environment, researching technologies and expanding until, by the end, you achieve hegemony over the world. Or rather, until you quit the game because you’re bored.
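
To make the pattern concrete, here is a minimal sketch of the structure being described, with invented tech and faction names rather than any actual game's code: every faction walks the same research list, and the only real variable is how fast it reaches the end.

```python
# Toy model of a uniform 4X tech tree: one shared, linear trajectory.
# All names are invented for illustration; no real game works exactly this way.

TECH_TREE = ["pottery", "writing", "currency", "gunpowder", "industrialization", "computing"]

class Faction:
    def __init__(self, name, research_per_turn):
        self.name = name
        self.research_per_turn = research_per_turn  # the only real difference between factions
        self.progress = 0.0

    def take_turn(self):
        self.progress += self.research_per_turn

    def techs_known(self):
        # Every faction unlocks the same techs, in the same order.
        return TECH_TREE[: min(int(self.progress), len(TECH_TREE))]

factions = [Faction("Alpha", 1.0), Faction("Beta", 0.8)]
for _ in range(5):
    for faction in factions:
        faction.take_turn()

for faction in factions:
    print(faction.name, "->", faction.techs_known())
```

Difference between factions is reduced to position along one shared track, which is exactly the "elimination of difference" the paragraph above describes.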

As Civ 5 dev Jon Shafer has noted, nobody finishes Civ games. Now I don’t believe there is one single reason for this, but I would argue that this evolutionary worldview is a reason. Games are supposedly a series of interesting decisions, but one of the dirty tricks of social evolution is to obfuscate political decisions under the guise of progress. Effectively your only decisions are how to advance through a predetermined trajectory culminating with “us”, “the US”. This is easier to perceive in tech trees, but it’s also true of those two other Xs: expand, exterminate. Make the world homogenous, make the world boring. The early turns that players enjoy are the ones that put them into contact with difference. The rest of the game sees them destroy it.

So hopefully it’s clear why being featured in games like Civilization is so heinously offensive to present-day indigenous populations such as the Poundmaker Cree. The implicit argument, even if unintentional, is that “we” are all playing the same game, you just sucked at it. Or look at Crusader Kings II, which had a whole expansion (Sunset Invasion) premised on the notion of the Aztec Empire invading and colonizing Europe.

But, you might argue, even if social evolutionism is offensive it might nonetheless be right, a harsh truth we need to come to terms with about “human nature”. After all, wasn’t anthropology founded in accordance with this idea? But therein lies the problem: the idea of a single evolutionary ladder was the founding assumption of the discipline, an assumption that quickly ran into all sorts of problems.

Here are some examples: horses are a new addition to the Americas, having arrived with European settlers and then gradually spread throughout the continent. Before this, what are now known as the Indigenous people of the Great Plains seem to have been largely sedentary, becoming increasingly nomadic as they gradually developed an incredibly intricate and intense relationship with these animals. Similarly, there is significant archaeological evidence that many densely populated and interconnected centers began emerging in the Amazon (particularly in the Black and Xingu river valleys) from about 0 AD until roughly the thirteenth century, when the trend begins to reverse and habitation becomes increasingly decentralized and nomadic (needless to say, the arrival of Europeans greatly accelerated this process).

This illustrates one of the problems with evolutionary views: they create rigid typologies (the rungs of the ladder) that break down very quickly given the incredible diversity of human populations. Even in Europe this should already have been clear: one of the oldest sites of sedentary habitation on the continent is the Iron Gates region of the Danube, where the Lepenski Vir I and II archaeological sites are located. Problem is, those populations don’t seem to have ever developed agriculture, which is what is “supposed” to happen.

But even if the idea of a single evolutionary ladder is discarded, there are still many problems with these conceptions of progress. If we go back to Spencer, we clearly see the idea that more advanced societies are more complex societies. This was one of the justifications French sociologist Émile Durkheim gave for studying Australian aboriginal religion: since they supposedly had the simplest religion, it would be easier to derive the Elementary Forms of Religious Life (the title of his book) by observing them. Problem is, if you’re going to grade different peoples on their relative simplicity like some kind of Olympic judge, you first need to decide what the sport is. Nobody disputes that Indo-Europeans are great at making products; as a matter of fact, “providers of merchandise” or “people of merchandise” is one of the most common names for the “white man” among Amazonian peoples. But what about everything else?

Because those very same Australian aboriginal populations who have been so continuously discriminated against by generations of academics have also developed the most complex kinship systems on the planet. There is, of course, much diversity between different populations, but many defy the limits of what can be modelled, and almost all require at least a three-dimensional diagram. Here, for example, is an attempt to model Murngin/Yolngu patricycles/matricycles using a five-dimensional hypercube:

Barbara Glowczewski's kinship hypercube (Mankind, December 1989, vol. 19, no. 3)

By comparison most present-day European kinship systems are among the simplest ever observed, and that’s the point: complexity and simplicity are very much in the eye of the beholder. Unsurprisingly, strategy games tend to only engage with complexity when it can be converted into a military or economic trait; the rest is treated as irrelevant or merely aesthetic. The tendency, when looking at different populations, is to fixate on familiarities, either because something appears similar or because something supposedly essential is missing. Much of anthropology up until the midpoint of the last century could be crassly summarized in the question “how come all these people don’t have a State?”
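
To give a sense of the formal structure involved, here is a toy sketch of the simplest classical case, a four-section system of the kind long modelled as a Klein four-group. The labels are generic, and this is emphatically not Glowczewski's Murngin/Yolngu model, which involves eight subsections and asymmetric marriage preferences and is far harder to diagram.

```python
# Toy four-section kinship system modelled as the Klein four-group (Z2 x Z2).
# Sections are generic labels, not the terms of any particular community.
SECTIONS = {(0, 0): "A", (1, 0): "B", (0, 1): "C", (1, 1): "D"}

def combine(s, t):
    return (s[0] ^ t[0], s[1] ^ t[1])

def spouse(section):
    # Prescribed marriage: A <-> B, C <-> D.
    return combine(section, (1, 0))

def child_of_mother(section):
    # Matriline alternates A <-> D and B <-> C each generation.
    return combine(section, (1, 1))

def child_of_father(section):
    # Patriline alternates A <-> C and B <-> D each generation.
    return combine(section, (0, 1))

for s in SECTIONS:
    # A child's section is the same whether derived through the mother
    # or through her prescribed husband.
    assert child_of_mother(s) == child_of_father(spouse(s))
    # Both lines close their cycle after two generations.
    assert child_of_mother(child_of_mother(s)) == s
    assert child_of_father(child_of_father(s)) == s

print("four-section system is internally consistent")
```

Systems with eight or sixteen subsections, plus preferences over particular cousin categories, need more generators and more dimensions to represent, which is one reason analysts end up with diagrams like the hypercube above.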

There are some signs of change (dare I say progress? Delete this stupid joke) in the genre though. The upcoming Humankind by Amplitude is aggressively signaling a break with 4X conventions, the stated goal being to write, not “win”, history. Among its most interesting ideas is that every age will afford the player an opportunity to play as a new culture, so one may select Babylonians during the Bronze Age and then Germans in the Iron Age. While the idea that societies progressed along set technological ages has by and large been discredited, the notion of changing cultures (rather than a continuous, atemporal people) is an important break with tradition.

Humankind screenshot courtesy of Sega

For now, strategy games by and large continue to reproduce notions of progress (particularly technological progress) in an uncritical fashion. Take efficiency, for example: It is common for new technologies in games to increase efficiency, which is almost always presented as unambiguously good. But while increased efficiency tends to either increase production or require less work, the practical downside is rarely modelled in games: the former increases the consumption of resources, the latter depresses wages. Being more advanced doesn’t make either inherently beneficial, or as famed science-fiction author Ursula K. Le Guin wrote “it seems fairly clear to me that to count upon technological advance for anything but technological advance is a mistake”.
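
A back-of-the-envelope illustration of the trade-off being described, with made-up numbers: doubling efficiency either doubles resource throughput (if the workforce is held constant) or halves the wage bill (if output is held constant), and neither outcome is neutral.

```python
# Invented numbers to illustrate the two faces of an "efficiency" technology.
WAGE = 10.0                # pay per worker per turn
RESOURCES_PER_GOOD = 3.0   # raw material consumed per finished good

def snapshot(label, workers, output_per_worker):
    goods = workers * output_per_worker
    print(f"{label}: goods={goods:.0f}, raw materials={goods * RESOURCES_PER_GOOD:.0f}, "
          f"wage bill={workers * WAGE:.0f}")

snapshot("baseline (100 workers)       ", 100, 2.0)
snapshot("2x efficiency, same workforce", 100, 4.0)  # output and resource use double
snapshot("2x efficiency, same output   ", 50, 4.0)   # the wage bill is halved instead
```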

These vague notions of progress perform a sort of magic trick, hiding political choices under a curtain of assumptions which continue to linger. Eventually anthropology moved on from social evolutionism, but the ideas stayed. Most people have never heard of Spencer, or the early anthropologists like Morgan, Tylor and Frazer, but their theories permeate the "common sense" that is reproduced in games, television, books etc. One explanation would be to credit these authors with having shaped the public consciousness, and that’s probably true to some extent; after all, they got Darwin to start using the term "evolution". But we can also look at evolutionism another way: not as some tenacious intellectual weed, but as a story people like to hear. “The west” played the universal game better than anyone else, “we” are the apex, “our” way is the only way. There will always be a market for reconfirming people’s beliefs, and games, being a product that is sold in a capitalist context, are particularly susceptible to this. Ideas never really die, they just find a new way to express themselves. While current anthropologists no longer entertain notions of human progress, games sure do.

The original Civilization was released in September 1991; Francis Fukuyama proclaimed the triumph of liberal democracy in “The End of History and the Last Man” in 1992. Now, in 2019, it’s hard to find such a narrative outside of games. It is certainly possible that upcoming releases like Humankind and Ten Crowns may herald the end of this era for grand strategy. Arguably Paradox owes much of its success to making games that are enjoyable as a collection of partial experiences rather than systems to be mastered in pursuit of victory. And while this can be interpreted exclusively as a question of game design, it contains an inherently political decision. Because if the 4X genre abandons the idea that history has (or will have) a victor, it also abandons a view of history that sees it as a competition between nations and/or races. And that would be no great loss.

duerig, 41 days ago:
The idea of history as a triumphant rising arc of progress is clearly wrong. Yet to discount progress altogether seems just as wrong. The state of the entire world for almost all of history has been one of dire poverty, even for those who were 'rich' but especially for the great mass of the people. Given that poverty is no longer universal, something has happened for the better. And it is therefore important for us to take inspiration from that change and figure out ways to make it both broader and more sustainable. We have an unclean legacy, but it is also a legacy of hope. We are not the inevitable apex of history. But our lives are better than those of our ancestors, and we can work to make the lives of our descendants just as good and even better than our own.

People who are given correct information still misremember it to fit their own beliefs


The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“People can self-generate their own misinformation. It doesn’t all come from external sources.” Researchers at Ohio State found that even when people are provided with accurate numerical information, they tend to misremember those numbers to match whatever beliefs they already hold: “For example, when people are shown that the number of Mexican immigrants in the United States declined recently — which is true but goes against many people’s beliefs — they tend to remember the opposite,” OSU’s Jeff Grabmeier reports in an article summarizing the research on Phys.org (the full paper is here).

110 people were presented with “short written descriptions of four societal issues that involved numerical information.” On two of the issues, the statistics fit the conventional wisdom; on the other two, the statistics belied it.

For example, most people believe that the number of Mexican immigrants in the United States grew between 2007 and 2014. But in fact, the number declined from 12.8 million in 2007 to 11.7 million in 2014.

After reading all the descriptions of the issues, the participants got a surprise. They were asked to write down the numbers that were in the descriptions of the four issues. They were not told in advance they would have to memorize the numbers.

The researchers found that people usually got the numerical relationship right on the issues for which the stats were consistent with how many people viewed the world. For example, participants typically wrote down a larger number for the percentage of people who supported same-sex marriage than for those who opposed it — which is the true relationship.

But when it came to the issues where the numbers went against many people’s beliefs — such as whether the number of Mexican immigrants had gone up or down — participants were much more likely to remember the numbers in a way that agreed with their probable biases rather than the truth.

In a second study, participants played a “Telephone”-like game. The first person in the chain saw the accurate statistics about the trend in Mexican immigrants living in the United States (that it went down from 12.8 million to 11.7 million), wrote those numbers down from memory and passed them on to a second person, who did the same with a third person, and so on.

Results showed that, on average, the first person flipped the numbers, saying that the number of Mexican immigrants increased by 900,000 from 2007 to 2014 instead of the truth, which was that it decreased by about 1.1 million.

By the end of the chain, the average participant had said the number of Mexican immigrants had increased in those 7 years by about 4.6 million.

“These memory errors tended to get bigger and bigger as they were transmitted between people,” [study coauthor Matt Sweitzer] said.
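
A minimal simulation of the compounding effect described above, under an assumed toy model of belief-consistent recall; the parameters are invented and are not the study's:

```python
import random

random.seed(1)

TRUE_2007, TRUE_2014 = 12.8, 11.7   # millions, the figures used in the study's materials

def recall(numbers, flip_prob=0.35, drift=0.2, noise=0.15):
    """Toy model of belief-consistent recall: the remembered trend tends to flip
    toward the prior belief (that immigration rose), and once it points that way
    the remembered increase gets slightly exaggerated at each retelling."""
    a, b = numbers
    if b < a and random.random() < flip_prob:
        a, b = b, a                   # flip the relationship to match the prior belief
    if b >= a:
        b += drift                    # exaggerate the belief-consistent gap
    return (a + random.gauss(0, noise), b + random.gauss(0, noise))

chain = [(TRUE_2007, TRUE_2014)]
for _ in range(20):                   # twenty hand-offs in the "telephone" chain
    chain.append(recall(chain[-1]))

for i in range(0, len(chain), 5):
    a, b = chain[i]
    print(f"link {i:2d}: 2007={a:5.1f}M  2014={b:5.1f}M  remembered change={b - a:+5.1f}M")
```

The point is only qualitative: a small belief-driven flip plus a little exaggeration at each retelling is enough to turn a real decrease into a steadily growing remembered increase.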

At least 30 journalists worldwide are imprisoned for spreading “fake news.” That’s a huge increase since 2012. At least 250 journalists are in jail worldwide for reasons related to their work, the Committee to Protect Journalists said this week in its annual report. Of those, “the number charged with ‘false news’ rose to 30 compared with 28 last year. Use of the charge, which the government of Egyptian president Abdel Fattah el-Sisi applies most prolifically, has climbed steeply since 2012, when CPJ found only one journalist worldwide facing the allegation.”

From The Washington Post:

It wasn’t this way five years ago, said Courtney Radsch, advocacy director of the CPJ, which tracks these trends.

In 2012, there was just one journalist in jail on fake-news charges. By 2014, there were eight. Then came 2016, when the most dramatic rise began, in which 16 journalists worldwide were in jail on fake-news charges. The number rose to 27 in jail by the end of last year.

Overall, between 2012 and 2019, there have been 65 journalists imprisoned on false-news charges. For comparison, since 1992, when the CPJ started tracking the trend, an overall 120 journalists have at one point been locked up for spreading so-called fake news. That means more than half of the journalists jailed on these charges were in prison sometime in the past seven years.

Most of the journalists who have been jailed on fake news charges over the past 7 years are in Egypt (7), followed by Turkey (6), Somalia (5), and Cameroon (5). Singapore passed a restrictive fake news law this year.

“Strategic intent is not strategic impact.” Digital disinformation campaigns can be large and organized — and still have very little impact on their targets, writes David Karpf, an associate professor of media and public affairs at George Washington University, in MediaWell. (MediaWell is run out of the Social Science Research Council, the independent nonprofit that, among other things, is assisting on the project that gets Facebook to share data with academics. It’s not going that well, reportedly!)

Much of the attention paid by researchers, journalists, and elected officials to online disinformation and propaganda has assumed that these disinformation campaigns are both large in scale and directly effective. This is a bad assumption, and it is an unnecessary assumption. We need not believe digital propaganda can “hack” the minds of a fickle electorate to conclude that digital propaganda is a substantial threat to the stability of American democracy. And in promoting the narrative of IRA’s direct effectiveness, we run the risk of further exacerbating this threat. The danger of online disinformation isn’t how it changes public knowledge; it’s what it does to our democratic norms. […]

The first-order effects of digital disinformation and propaganda, at least in the context of elections, are debatable at best. But disinformation does not have to sway many votes to be toxic to democracy.

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

duerig, 47 days ago:
I think we are all walking around with an incorrect folk theory about what happens in the mind. We picture memory as a book or hard drive that provides an accurate archive of all our experiences. And we think sometimes this memory flows into the 'reason' part of our brain, which then yields 'truth', and sometimes it flows into the 'biases and emotion' part of our brain, which yields 'lies'. Stupid and evil people then rely heavily on the 'biases and emotions' part, while smart and honest people rely on the pure light of reason and arrive at the truth.

The actual mind is a lot messier and it must be because the universe is incredibly complicated. There is no direct line to truth and the reason we have biases and emotions is that they are part of our reasoning process. Memory is not a record of the past, but an active notebook that is constantly being updated, smudged, deleted, and referenced all at the same time. We are messy because that is the only way to be smart in a complicated world.

Studies like this don't show that people are stupid. They show that our simple theory of the mind is wrong. And that means that the promise of persuasion is wrong too. We think that persuasion means making a rational argument that compels the other party's faculty of reason. Or that it means making an appeal to their biases and emotions. But neither of these is the case. When you talk to somebody for ten minutes, you are giving them a little bit of experience which they then use to scribble in one of the tiny corners of their notebook. It won't make much of an impact because it is ten minutes of conversation weighed against a lifetime's accumulation in their notebook. The only real way to persuade somebody of anything is to take up enough of their intellectual bandwidth that you are providing them with masses of experience to update their notebook with. This takes time and persistence. And it has to start with where they are and not where you are. And if you are literally having conversations with somebody, you will be just as changed by the experience as they will, because they will have taken up your intellectual bandwidth for that time as well. It is only with broadcast media that an unchanged persuader is even possible. And then it isn't any particular argument or appeal but the weight of constant listening that is persuasive.

I'm not sure how to get out of the information minefield we are in today. But I do know that we first have to abandon the dream of immediate persuasion to get there. People will rationally discount sources that contradict things that seem true. And we rationally adjust and weed out memories that are inconsistent with each other and with what we think is true. These rational processes are the reason why that one-line zinger which seems like a knock-down argument always fails. Those who believe will nod and smile. Those who don't will shrug. It isn't their stupidity that you are contending against. It is their intelligence.

bogorad, 47 days ago (Barcelona, Catalonia, Spain):
Duh! Ever heard of 'wizard's first rule'?

The “Harbinger Customers” Who Buy Unpopular Products & Back Losing Politicians



This paper, about the curious phenomenon of “harbinger customers” and “harbinger zip codes”, is really interesting. These harbinger customers tend to buy unpopular products like Crystal Pepsi or Colgate Kitchen Entrees and support losing political candidates.

First, the findings document the existence of “harbinger zip codes.” If households in these zip codes adopt a new product, this is a signal that the new product will fail. Second, a series of comparisons reveal that households in harbinger zip codes make other decisions that differ from other households. The first comparison identifies harbinger zip codes using purchases from one retailer and then evaluates purchases at a different retailer. Households in harbinger zip codes purchase products from the second retailer that other households are less likely to purchase. The analysis next compares donations to congressional election candidates; households in harbinger zip codes donate to different candidates than households in neighboring zip codes, and they donate to candidates who are less likely to win. House prices in harbinger zip codes also increase at slower rates than in neighboring zip codes.

It’s fascinating that these people’s preferences persist across all sorts of categories — it’s like they’re generally out of sync with the rest of society.

Perhaps the most surprising aspect of the harbinger customer effect is that the signal extends across CPG categories. Customers who purchase new oral care products that flop also tend to purchase new haircare products that flop. Anderson et al. (2015) interpret their findings as evidence that customers who have unusual preferences in one product category also tend to have unusual preferences in other categories. In other words, the customers who liked Diet Crystal Pepsi also tended to like Colgate Kitchen Entrees (which also flopped).

(via bb)
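
As a rough sketch of how the paper's approach might be operationalized (field names and data are invented, not the authors' code): score each zip code by the share of its new-product purchases that turned out to be flops at one retailer, then test whether high-scoring zips also over-index on flops in data from a different retailer.

```python
from collections import defaultdict

# Invented purchase records from one retailer: (zip_code, new_product, product_flopped).
purchases_retailer_one = [
    ("11111", "crystal_cola", True),
    ("11111", "microwave_lasagna", True),
    ("11111", "plain_yogurt", False),
    ("22222", "plain_yogurt", False),
    ("22222", "sparkling_water", False),
    ("22222", "crystal_cola", True),
]

def harbinger_scores(purchases):
    """Share of each zip code's new-product purchases that turned out to be flops."""
    flops, totals = defaultdict(int), defaultdict(int)
    for zip_code, _, flopped in purchases:
        totals[zip_code] += 1
        flops[zip_code] += flopped
    return {z: flops[z] / totals[z] for z in totals}

print(harbinger_scores(purchases_retailer_one))
# zip 11111: 2 of 3 purchases were flops; zip 22222: 1 of 3.
# The paper's out-of-sample step then asks whether high-scoring zips also
# over-index on eventual flops in data from a *different* retailer.
```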

duerig, 48 days ago:
This makes a certain amount of sense when it comes to political candidates. But if a product is popular in some zip codes and not others, then that product can be a success with a sustainable return. That makes me wonder if these people are less 'harbingers of failure' than 'seekers after novelty'. If they kept drinking Crystal Pepsi forever, then it would still be around. Maybe they latched onto Crystal Pepsi this month and then switched to purple Mountain Dew the next.

Seeing Like a Finite State Machine


Reading this tweet by Maciej Ceglowski makes me want to set down a conjecture that I’ve been entertaining for the last couple of years (in part thanks to having read Maciej’s and Kieran’s previous work as well as talking lots to Marion Fourcade).

The conjecture (and it is no more than a plausible conjecture) is simple, but it straightforwardly contradicts the collective wisdom that is emerging in Washington DC, and other places too. This collective wisdom is that China is becoming a kind of all-efficient Technocratic Leviathan thanks to the combination of machine learning and authoritarianism. Authoritarianism has always been plagued with problems of gathering and collating information and of being sufficiently responsive to its citizens’ needs to remain stable. Now, the story goes, a combination of massive data gathering and machine learning will solve the basic authoritarian dilemma. When every transaction that a citizen engages in is recorded by tiny automatons riding on the devices they carry in their hip pockets, when cameras on every corner collect data on who is going where, who is talking to whom, and use facial recognition technology to distinguish ethnicity and identify enemies of the state, a new and far more powerful form of authoritarianism will emerge. Authoritarianism, then, can emerge as a more efficient competitor that can beat democracy at its home game (some fear this; some welcome it).

The theory behind this is one of strength reinforcing strength – the strengths of ubiquitous data gathering and analysis reinforcing the strengths of authoritarian repression to create an unstoppable juggernaut of nearly perfectly efficient oppression. Yet there is another story to be told – of weakness reinforcing weakness. Authoritarian states were always particularly prone to the deficiencies identified in James Scott’s Seeing Like a State – the desire to make citizens and their doings legible to the state, by standardizing and categorizing them, and reorganizing collective life in simplified ways, for example by remaking cities so that they were not organic structures that emerged from the doings of their citizens, but instead grand chessboards with ordered squares and boulevards, reducing all complexities to a square of planed wood. The grand state bureaucracies that were built to carry out these operations were responsible for multitudes of horrors, but also for the crumbling of the Stalinist state into a Brezhnevian desuetude, where everyone pretended to be carrying on as normal because everyone else was carrying on too. The deficiencies of state action, and its need to reduce the world into something simpler that it could comprehend and act upon created a kind of feedback loop, in which imperfections of vision and action repeatedly reinforced each other.

So what might a similar analysis say about the marriage of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One – that while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for already existing biases in the data. The patterns that it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (by singling out, for example, particular groups that are regarded as problematic for special police attention, leading them to be more liable to be arrested and so on), the bias may feed upon itself.
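
A deliberately crude toy model of that feedback loop, in the spirit of the "runaway feedback" literature on predictive policing rather than a model of any real system: two areas with identical underlying offense rates, patrols allocated according to past recorded incidents, and incidents only entering the data where patrols go.

```python
import random

random.seed(0)

TRUE_RATE = {"north": 0.10, "south": 0.10}   # identical underlying offense rates
recorded = {"north": 6, "south": 4}          # a small, arbitrary imbalance in the initial data
PATROLS_PER_DAY = 20

for day in range(200):
    # Policy: send the patrols wherever the data says the problem is...
    target = max(recorded, key=recorded.get)
    # ...and incidents are only recorded where patrols are actually present.
    for _ in range(PATROLS_PER_DAY):
        if random.random() < TRUE_RATE[target]:
            recorded[target] += 1

print("recorded incidents after 200 days:", recorded)
# The two areas are identical, but the record now "proves" one of them is the problem.
```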

This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens up democracy to a specific set of destabilizing attacks, but it also means that there are countervailing tendencies to self-reinforcing biases. When there are groups that are victimized by such biases, they may mobilize against them (although they will find it harder to mobilize against algorithms than against overt discrimination). When there are obvious inefficiencies or social, political or economic problems that result from biases, then there will be ways for people to point out these inefficiencies or problems.

These correction tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely even exist. Groups that are discriminated against will have no obvious recourse. Major mistakes may go uncorrected: they may be nearly invisible to a state whose data is polluted both by the means employed to observe and classify it, and the policies implemented on the basis of this data. A plausible feedback loop would see bias leading to error leading to further bias, and no ready ways to correct it. This of course, will be likely to be reinforced by the ordinary politics of authoritarianism, and the typical reluctance to correct leaders, even when their policies are leading to disaster. The flawed ideology of the leader (We must all study Comrade Xi thought to discover the truth!) and of the algorithm (machine learning is magic!) may reinforce each other in highly unfortunate ways.

In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency to bad decision making, and reducing further the possibility of negative feedback that could help correct against errors. This disaster would unfold in two ways. The first will involve enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort that we are seeing against the Uighur today. The second will involve more ordinary self-ramifying errors, that may lead to widespread planning disasters, which will differ from those described in Scott’s account of High Modernism in that they are not as immediately visible, but that may also be more pernicious, and more damaging to the political health and viability of the regime for just that reason.

So in short, this conjecture would suggest that the conjunction of AI and authoritarianism (has someone coined the term ‘aithoritarianism’ yet? I’d really prefer not to take the blame) will have more or less the opposite effects of what people expect. It will not be Singapore writ large, and perhaps more brutal. Instead, it will be both more radically monstrous and more radically unstable.

Like all monotheoretic accounts, you should treat this post with some skepticism – political reality is always more complex and muddier than any abstraction. There are surely other effects (another, particularly interesting one for big countries such as China, is to relax the assumption that the state is a monolith, and to think about the intersection between machine learning and warring bureaucratic factions within the center, and between the center and periphery). Yet I think that it is plausible that it at least maps one significant set of causal relationships that may push (in combination with, or against, other structural forces) towards very different outcomes than the conventional wisdom imagines. Comments, elaborations, qualifications and disagreements welcome.

duerig, 63 days ago:
The first rule of authoritarian regimes is that they need to project strength. Therefore it is foolish for those on the outside to take their claims of strength at face value. This also makes them much less resilient. They will be completely stable until they fracture. There is never any real secret sauce. There is only the decades or centuries long bluff that works until it is suddenly called.

freeAgent, 63 days ago:
I worked for a company that used voice printing tech to tag scammers who called into contact centers (typically to engage in financial fraud, e.g. getting new credit/debit cards sent to a new address). It worked in near real-time and was very accurate with a fairly small pool of known scammers (these guys are at it all day and will make dozens of calls each day). It would even catch the same person trying to disguise their voice, such as when a man tried to sound like a woman or vice versa. Unfortunately, we also had a reporting mechanism that was used to flag new calls. It turns out that there are a LOT of scammers out there. Over time, the voice print database of known scammers grew...and grew. At first this seems like an amazing thing! We have biometric data matching all the scammers! The problem is that with so many scammers out there and in our database, non-scammers are bound to end up producing hits and our accuracy dropped. Phone call audio is not particularly high-fidelity, so that didn't help things either. With that said, phone audio quality has been rapidly improving and this may be a more viable project in the future or even today. I wouldn't know, however, since I'm no longer in the space.
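
freeAgent's point about accuracy dropping as the database grows is essentially a gallery-size/base-rate effect. A quick sketch with an invented per-comparison false-match rate: even a very accurate matcher produces frequent false hits once every caller is screened against tens of thousands of enrolled prints.

```python
# Probability that an innocent caller falsely matches *someone* in the scammer
# voice-print database, assuming independent comparisons. The per-comparison
# false-match rate below is invented for illustration.
FALSE_MATCH_RATE = 0.0005   # 0.05% chance any single comparison is a false match

for gallery_size in (100, 1_000, 10_000, 50_000):
    p_false_hit = 1 - (1 - FALSE_MATCH_RATE) ** gallery_size
    print(f"{gallery_size:6d} enrolled prints -> {p_false_hit:5.1%} chance of a false hit per caller")
```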

Gen Breakfast



duerig, 71 days ago:
Watching this movie now, I mostly sympathize with the thankless job of that poor vice-principal who had to deal with these five malcontents on his day off.

trevorjackson, 66 days ago:
I had a similar experience rewatching Empire Records: what terrible employees Joe had to deal with.