I feel like the TikTok ban is only the start.

The US is pissed that it couldn’t 100% control the narrative on Israel genociding Palestine and sees the internet as the reason why. They’ve already put a lot of effort into homogenising and controlling the narrative on most big social media sites. I wouldn’t be surprised if they started cracking down more under the guise of “stopping misinformation”.

  • Staines [they/them]@hexbear.net · 96 points · 7 months ago

    I think it will be coming hard and fast.

    If you don’t move, you don’t notice your chains. Being censored directly on reddit was extremely radicalizing for a lot of people here. Once you’ve noticed the chains, it’s almost impossible to unsee them. Once you’ve had physical violence committed against you at a peaceful protest, you can’t forget just how thin the veneer of civility is. They’re creating an entire generation of people like us by actively censoring and overreacting. The illusion is shattered permanently for more people every day.

  • Greenleaf [he/him]@hexbear.net · 65 points · edited · 7 months ago

    Yes. For years now, when I’ve engaged with libs on the topic of free speech (usually w/r/t China), I point out that the amount of free speech the people of a country have is inversely related to how much of a threat that speech is perceived to pose by ruling powers. China has relatively free speech, but it’s still a socialist country living in a capitalist world that wants it dead, so it’s not totally unfettered.

    Libs love to tout how Americans have totally free speech (debatable, but still). But up until recently, free speech hasn’t been a threat to power in the US. So sure, let the peasants have free speech, it won’t actually change anything.

    Well now it seems that the ruling class do perceive a threat. They thought they could control speech in the internet age by making sure the biggest social media outlets are firmly under their thumb. They have Facebook, Google, and Twitter. But TikTok changed the game. The fact that it’s from China is a happy coincidence for them - if it were instead from a vassal state or some relatively powerless state outside their orbit, they would have muscled their way in.

    Not being able to control the narrative is a threat, so that speech needs to be restricted.

  • What_Religion_R_They [none/use name]@hexbear.net · 51 points · 7 months ago

    the us already has what’s in effect a great firewall/iron curtain. most of the internet passes through google/cloudflare, and certain phrases are already censored at the hosting level, as we saw with akamai and with certain iranian site names. on the platform level the NSA, FBI, CIA, DOD, and state dept already shape the algorithms; we’ve known this since the twitter files.

    there’s a difference between them doing something “legally” and what they already do in practice

  • dislocate_expansion@reddthat.com · 49 points · 7 months ago

    I feel like we’ve already been seeing it with deplatforming and shadow banning, but the real kicker has been the social cooling caused by surveillance and individual social anxiety. More to come is expected, imo.

  • CthulhusIntern [he/him]@hexbear.net · 46 points · 7 months ago

    I feel like the constant warnings about misinformation are a way to manufacture consent for this. And like all good propaganda, there is truth behind it.

    • Frank [he/him, he/him]@hexbear.net · 28 points · 7 months ago

      Oh yeah, 100%. Liberals yelling about “Disinformatsiya” and other shit to try to make lying sound like a Russian plot. In practice, “misinformation” is anything they don’t like, disagree with, or that challenges their narrative. Doesn’t matter if it’s true or not, well documented or not. It’s a thought-terminating cliché.

  • umbrella@lemmy.ml · 45 points · edited · 7 months ago

    yes. and they will mainly be labeling and silencing us; the surveillance infrastructure is already built for this. they are already treating fascists much more leniently online.

    even irl, take the charlottesville protests: imagine how much police brutality there would have been if they had been MLs instead of nazis.

    did the fascist counterprotesters at the university protests get the same violent treatment from the police as the antiwar people?

    e: let’s organize and move to open platforms, folks. those are more resilient to censorship and surveillance.

    • porcupine@lemmygrad.ml · 2 points · 7 months ago

      take the charlottesville protests: imagine how much police brutality there would have been if they were MLs instead of nazis.

      I don’t think there would have been police brutality, I think there would have been military brutality

  • marxisthayaca [he/him,they/them]@hexbear.net · 41 points · edited · 7 months ago

    We’ve been in a huge censorship wave for the last 8 years. Since Trump, violent and peaceful resistance to his actions alike gets you delisted, shadow banned, etc. They make some gestures toward banning people on the other side, but society is so awash in a miasma of white supremacy that you cannot really ban it. You can only ban those who get a little too mean.

    Additionally, all of these platforms have spent humongous amounts of money basically banning the words that most accurately describe the world: killing, murder, suicide, genocide, occupation, resistance, etc. We’ve found ways of circumventing it with dollar signs, blacked-out letters, emojis, etc. But it makes that information harder and harder to find.

    • Frank [he/him, he/him]@hexbear.net · 26 points · 7 months ago

      Add in the extensive self-censorship people are doing (“unalived”) because they don’t know what will get them in trouble with the platforms. Now that I think about it, I’m kind of surprised I don’t recall hearing “chilling effect” recently.

  • it has long seemed to me that the censoring of information in the west is done through distraction and entertainment. there is so much media to consume and the most easily consumed has historically been the media that serves the interests of the powerful. this is still true, though the market concentration of legacy media ownership reached a crescendo just as the internet started to proliferate.

    capital has obviously inserted itself into the internet’s largest platforms, which all benefit from network effects. the effect that social media like facebook and twitter have had on the dissemination of news is hard to overstate. of course, the legacy platforms try to differentiate themselves as being somehow more legitimate, but that distinction falls apart outside of obvious specific examples. the real difference is that interactivity in legacy media is nonexistent.

    legacy media has only ever been interested in creating one-way outputs: articles, videos, etc., where an engaged audience is presumed to exist and to agree with those outputs. the web 2.0 phenomenon has completely blown this up. nowhere is this more obvious and absurd than in their curated “Town Hall” events, where a handpicked Joe Blow is brought in to ask an approved question from a note card, and this is meant to represent the public square.

    in any event, more to the question of censoring the internet, i think what we’re seeing is an attempt to bring the “public square” under some level of control. we all know that people arguing in the comments section is often more interesting and engaging than probably 90% of media outputs. when that is taken away, people go elsewhere to do it. communities are still trying to find the level of moderation they desire for that kind of interaction. all the while, the established power structure is seeking to insert itself into that conversation within the largest communities.

    and yes, i think “preventing violent extremism” is the tactic that gives them the most leeway and power. “national security” implications give the most latitude in avoiding courts and issuing gag orders. “stopping misinformation” is probably going to be the framing used more broadly when some censorship becomes public. for example, though the laws around the banning of TikTok are all weird national security legalese, the way it’s being framed by proponents of the ban is as a source of disinformation. i think this is because the national security argument has a better shot in legal interpretation than “people are lying on my internet program, ban the internet program”.

    a key piece of censoring the public square is to make sure the censorship itself doesn’t invite much attention or scrutiny.

    • axont [she/her, comrade/them]@hexbear.net · 13 points · 7 months ago

      Yeah, I came here to say this. Censorship in the west works through shifting emphasis or floods of nonsense. Average people don’t want to sift through hours of footage or go to obscure forums. They want immediate information, or the first thing they find that sounds right.

  • davel [he/him]@hexbear.net · 37 points · 7 months ago

    Many discussions about social media governance and trust and safety are focused on a small number of centralized, corporate-owned platforms that currently dominate the social media landscape: Meta’s Facebook and Instagram, YouTube, Twitter, Reddit, and a handful of others. The emergence and growth in popularity of federated social media services, like Mastodon and Bluesky, introduces new opportunities, but also significant new risks and complications. This annex offers an assessment of the trust and safety (T&S) capabilities of federated platforms—with a particular focus on their ability to address collective security risks like coordinated manipulation and disinformation.

    Centralized and decentralized platforms share a common set of threats from motivated malicious users—and require a common set of investments to ensure trustworthy, user-focused outcomes. Emergent distributed and federated social media platforms offer the promise of alternative governance structures that empower consumers and can help rebuild social media on a foundation of trust. Their decentralized nature enables users to act as hosts or moderators of their own instances, increasing user agency and ownership, and platform interoperability ensures users can engage freely with a wide array of product alternatives without having to sacrifice their content or networks. Unfortunately, they also have many of the same propensities for harmful misuse by malign actors as mainstream platforms, while possessing few, if any, of the hard-won detection and moderation capabilities necessary to stop them. More troublingly, substantial technological, governance, and financial obstacles hinder efforts to develop these necessary functions.

    As consumers explore alternatives to mainstream social media platforms, malign actors will migrate along with them—a form of cross-platform regulatory arbitrage that seeks to find and exploit weak links in our collective information ecosystem. Further research and capability building are necessary to avoid the further proliferation of these threats.