I’m really enjoying Lemmy. I think we’ve got some growing pains in UI/UX, and we’re missing some key features (like community migration and actual redundancy). But how are we going to collectively pay for this? I saw an (unverified) post that Reddit received $400M from ads last year. Lemmy isn’t going to be free. Can someone with actual server experience chime in with some back-of-the-napkin math on how expensive it would be if everyone migrated from Reddit?

  • panoptic@fedia.io · 1 year ago

    That’s what they’re saying.

    Essentially: if someone on the small instance subscribes to a community with a ton of data (huge post volume, images, whatever), the small instance has to pull that data over from the larger instance. At some point there may be communities so large that small instances can’t pull them in without tanking.
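    Since the OP asked for back-of-the-napkin math, here is a rough sketch of what that pull could cost a small instance. Every number below is an invented assumption for illustration, not a measured Lemmy figure:

```python
# Back-of-napkin estimate of inbound federation traffic for a small
# instance. All numbers are illustrative assumptions, not real data.

posts_per_day = 2_000      # total activity across subscribed communities
comments_per_post = 20
activity_bytes = 1_500     # assumed average size of one federated activity
image_ratio = 0.3          # fraction of posts carrying an image
image_bytes = 500_000      # assumed average image size, if images are mirrored

activities = posts_per_day * (1 + comments_per_post)
text_traffic = activities * activity_bytes                  # bytes/day
image_traffic = posts_per_day * image_ratio * image_bytes   # bytes/day

daily = text_traffic + image_traffic
monthly_gb = daily * 30 / 1e9
print(f"~{monthly_gb:.1f} GB/month inbound")  # → ~10.9 GB/month inbound
```

    Under these assumptions the raw transfer is modest; the heavier costs are likely storage (if media is mirrored indefinitely) and database write load, which this sketch doesn’t model.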

    • Silviecat44@vlemmy.net · 1 year ago (edited)

      I wonder if there is a way to get around this? Maybe smaller instances will have to be text-only?

    • honk@feddit.de · 1 year ago

      Maybe I phrased that poorly and you didn’t understand what I was trying to say. The size of the bigger instance shouldn’t matter at all, because the smaller instance only pulls data from communities its own members are subscribed to. Whether the bigger instance has 1,000 members or 2 million makes no difference. The only thing that should matter is how active the subscribed communities are.

      • panoptic@fedia.io · 1 year ago

        If I’m reading the protocol right, it’s probably larger instances that will avoid more duplication, since:

        1. There’s a higher chance they’ll have more communities shared among users (really tiny instances will probably also see a lot of overlap, since their users likely have interconnected interests, but I’d expect that overlap to fall off quickly as an instance grows and then converge again at scale).
        2. The larger number of users will mean they ‘use’ more of the content they’re pulling down (I can’t read all of a highly active community in a day, but 1000 people together checking through the day might ‘use’ it all).
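        A toy simulation of that overlap effect, since an instance only has to pull each community’s feed once no matter how many local users subscribe to it. The subscription counts and community pool sizes below are made up:

```python
# Toy model: distinct communities an instance must federate vs. user count.
# All numbers are invented for illustration, not real Lemmy statistics.
import random

random.seed(1)
POPULAR = range(50)            # a small pool of big, widely followed communities
LONG_TAIL = range(50, 5000)    # a large pool of niche communities

def user_subscriptions():
    # Assume each user follows 8 popular communities plus 4 niche ones.
    return set(random.sample(POPULAR, 8)) | set(random.sample(LONG_TAIL, 4))

ratios = []
for n_users in (1, 10, 100, 1000):
    pulled = set()  # distinct communities the instance must pull
    for _ in range(n_users):
        pulled |= user_subscriptions()
    ratios.append(len(pulled) / n_users)
    print(f"{n_users:>4} users -> {len(pulled):>4} communities "
          f"({ratios[-1]:.1f} per user)")
```

        The per-user pull cost drops steeply as users pile into the same popular communities, which is the deduplication advantage larger instances get.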

        I’m not sure where you see caching fitting in.
        I’m surprised the protocol doesn’t have some kind of lower-resolution digest concept (which might be what you’re looking for).