Elon Musk’s X failed to block a California law that requires social media companies to disclose their content-moderation policies.

U.S. District Judge William Shubb rejected the company’s request in an eight-page ruling on Thursday.

“While the reporting requirement does appear to place a substantial compliance burden on social media companies, it does not appear that the requirement is unjustified or unduly burdensome within the context of First Amendment law,” Shubb wrote, per Reuters.

The legislation, signed into law in 2022 by California Gov. Gavin Newsom, requires social media companies to publicly post their policies regarding hate speech, disinformation, harassment and extremism on their platforms. They must also report data on their enforcement of those policies.

  • Fisk400@feddit.nu · 6 months ago

    Am I correct in saying that the law is a transparency law and not a moderation law?

    • Drivebyhaiku@lemmy.world · 6 months ago

      It would appear so, but anything to do with digital spaces is murky.

      We kind of treat digital space the way we do physical space: since the digital space is owned, the people who own it get to set the rules and policies that govern it… But just as a shopping mall can’t eject you solely because of your race, certain justifications within moderation policies are theoretically grounds for constitutional protections.

      However, it is a fucking mess to try to use a court to actually enforce these laws the way we do in physical spaces. Here in Canada, for instance, uttering threats, performing hate speech in front of a crowd, and scribbling swastikas on things are illegal. But do that over a video game chat or some form of anonymizing social media and suddenly you’re dealing with citizens of other countries with different laws, plus a layer of difficulty in determining the source that would require a warrant to resolve. Even if both people are Canadian, you would need a court date, documentation that the law was appropriately followed in obtaining all your evidence, proof of guilt, and a decision about where the defendant must physically show up to defend themselves. And even if the prosecution does prove uttering threats or a hate speech violation… the defendant would probably just get a fine or community service.

      Nobody has time for that.

      So if you want to enforce the protections of these laws, either you hold the platform responsible for internally policing the law and judge whether it is discharging its duty properly, giving citizens a means to check for and report violations of the platform’s own internal policies for later review and to pursue civil cases… Or you go hands off and give a platform’s users the means to check and make informed choices based on their own personal standards and ethical principles. Every moderation policy leaves a burden on someone; the question is who.

      So it might be a transparency law, but it also opens the door to applying constitutional and civil rights law protections to users by holding the business accountable if there are glaring oversights in their digital fiefdoms… but such laws are basically inert until someone tries to challenge them.

        • Drivebyhaiku@lemmy.world · 6 months ago

          In Canada it’s a partial mix: protections granted by Charter rights and expanded by the Human Rights Act to apply more universally. But in the US you’re right, it’s covered just under the Civil Rights Act, I think?

          I may have slipped into a common error by mentioning constitutional matters where they don’t belong.

          • DreamlandLividity@lemmy.world · 6 months ago

            I deleted my comment a few seconds after posting it, since I remembered there was a mention of Canada and realized this is different in other countries. But I guess you still managed to reply :) Sorry, my bad.

            • Drivebyhaiku@lemmy.world · 6 months ago

              Still a valid point though. Canadians are a minority on these platforms, so most of the time I try my best to make my replies as generally applicable to American systems as possible… I find American law less approachable, but a Canadian enthusiast of legal philosophy generally learns both systems to be able to have conversations with a wider audience.

    • Cethin@lemmy.zip · 6 months ago

      As well as what the other comment says, it also allows people and businesses to see whether a platform’s moderation is appropriate for them and decide whether or not to use it accordingly. Transparency can itself drive moderation.


    • xor@lemm.ee · 6 months ago

      No, it’s transparency about moderation:

      Under AB 587, a “social media company” that meets the revenue threshold must provide the California AG with a copy of its current terms of service and semiannual reports on content moderation.

      The semiannual reports must include: (i) how the terms of service define certain categories of content (e.g., hate speech, extremism, disinformation, harassment and foreign political interference); (ii) how automated content moderation is enforced; (iii) how the company responds to reports of violations of the terms of service; and (iv) how the company responds to content or persons violating the terms of service.

      The reports must also provide detailed breakdowns of flagged content, including: the number of flagged items; the types of flagged content; the number of times flagged content was shared and viewed; whether action was taken by the social media company (such as removal, demonetization or deprioritization); and how the company responded.
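
      To make that concrete, here is a minimal, hypothetical sketch in Python of the kind of data structure a platform might use to assemble one of these semiannual reports. The class and field names are illustrative assumptions drawn only from the categories listed above, not the actual AB 587 reporting format:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict


class ContentCategory(Enum):
    # Content categories the terms of service must define, per the summary above.
    HATE_SPEECH = "hate_speech"
    EXTREMISM = "extremism"
    DISINFORMATION = "disinformation"
    HARASSMENT = "harassment"
    FOREIGN_POLITICAL_INTERFERENCE = "foreign_political_interference"


class ActionTaken(Enum):
    # Possible responses to flagged content mentioned in the summary.
    REMOVED = "removed"
    DEMONETIZED = "demonetized"
    DEPRIORITIZED = "deprioritized"
    NO_ACTION = "no_action"


@dataclass
class FlaggedContentBreakdown:
    # Per-category counts the report would need to break down.
    items_flagged: int = 0
    times_shared: int = 0
    times_viewed: int = 0
    actions_taken: Dict[ActionTaken, int] = field(default_factory=dict)


@dataclass
class SemiannualReport:
    # Hypothetical shape of one report: terms of service, category definitions,
    # a description of automated moderation, and per-category enforcement data.
    terms_of_service: str
    category_definitions: Dict[ContentCategory, str]
    automated_moderation_description: str
    breakdowns: Dict[ContentCategory, FlaggedContentBreakdown] = field(default_factory=dict)
```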

  • Leviathan@lemmy.world · 6 months ago

    It’s baffling that some people are convinced that he’s fighting the good fight for them. The absolute donuts.

  • MagicShel@programming.dev · 6 months ago

    “We just want you to be honest with us.”

    “What? That’s outrageous! We’d never do another dime of business if we aren’t allowed to lie!”

  • fne8w2ah@lemmy.world · 6 months ago

    And this comes after the EU decided to actually start enforcing its moderation laws! #BrusselsEffect