Thoughts? Seems like we're also one of the only bigger instances that still has open sign-ups.

Edit: didn’t mean to start any instance tribalism. Like others have said, it’s not a competition, and it’s better that users are spread across instances than piling into one. I just think it’s interesting to watch the different instances grow and change and see where people end up congregating.

  • Slashzero@hakbox.social · 6 points · 1 year ago

    That’s right. I hope Lemmy.world isn’t heading in that direction, and that Ruud won’t have to take it down because it costs too much money to run the servers.

    • pleasemakesense@lemmy.world · 19 points · 1 year ago

      Lemmy.world is hosted by the same people as mastodon.world, one of the ten biggest servers on Mastodon. If any instance is capable of dealing with the increased traffic, it’s this one.

    • Justin@lemmy.jlh.name · 11 points · 1 year ago

      lemmy.world has the IT operations experience to keep their site up; the others need to catch up.

      I’m developing some Kubernetes tools to make it easy for people to create their own scalable instances like lemmy.world has, so I’m hoping that can make it a bit easier.
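The tools mentioned above aren’t shown in the thread, so as a hedged illustration of the underlying idea: horizontally scaling a stateless service on Kubernetes mostly comes down to a Deployment whose replica count you can turn up (or hand to an autoscaler). The image name and labels below are made up for the example, not taken from any real Lemmy deployment.

```python
# Hypothetical sketch: build a minimal Kubernetes Deployment manifest as a
# Python dict. Names and image are placeholders, not a real configuration.
def make_deployment(name, image, replicas=2):
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,  # the horizontal-scaling knob
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Three identical frontend pods behind one Service would share the load.
manifest = make_deployment("lemmy-ui", "example/lemmy-ui:latest", replicas=3)
```

Serialized to YAML, this is what `kubectl apply` would consume; an HPA could then adjust `spec.replicas` automatically.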

      • Slashzero@hakbox.social · 6 points · 1 year ago

        Sorry, let me clarify. My point was that I hope Ruud doesn’t have to start shelling out thousands of dollars because everyone decides to register there and he needs to keep increasing capacity.

        • deadcyclo@lemmy.world · 6 up, 2 down · 1 year ago

          Thing is, Lemmy doesn’t support clustering/horizontal scaling, so there are limits to how much increasing you can do. You can beef it up with a database cluster, add a separate reverse proxy, and increase the specs of the hardware Lemmy runs on (but hardware can’t be expanded limitlessly), and that’s about it. Once you hit the limit of what a single instance of the Lemmy software can handle, you cannot scale any further. Pretty sure you’d hit that limit long before reaching thousands of dollars.

          • Luca@lemmy.world · 5 points · 1 year ago

            Hopefully by then, Lemmy’s source will have been augmented to support HA/horizontal scaling.

            • deadcyclo@lemmy.world · 4 points · 1 year ago

              Yeah. But horizontal scaling (well, horizontal scaling in a system like this, where you need clustering so the instances talk to each other) is hard, and I think a lot of other things need to be polished, added, and worked on before that. It would probably also need somebody with clustering knowledge to start contributing. I think step 1 is that the dev team needs more help properly tuning the database use. The database is very inefficient, and they lack the skill to improve it:

              We are in desperate need of SQL experts, as my SQL skills are very mediocre. https://github.com/LemmyNet/lemmy/issues/2877

              So getting help improving the database is probably the #1 thing that can be done to deal with the scaling problem.
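To make the database-tuning point concrete, here is a toy sketch of the kind of win an index can deliver. This deliberately uses SQLite from Python’s standard library so it’s runnable anywhere; Lemmy actually uses Postgres (via Diesel), and the table and column names below are invented for the example, not Lemmy’s real schema.

```python
import sqlite3

# Illustrative only: SQLite stands in for Postgres, and the schema is made up.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE post (id INTEGER PRIMARY KEY, community_id INTEGER, title TEXT)"
)
conn.executemany(
    "INSERT INTO post (community_id, title) VALUES (?, ?)",
    [(i % 50, f"post {i}") for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable plan in column 4.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM post WHERE community_id = 7"
before = plan(query)  # without an index: full table scan
conn.execute("CREATE INDEX idx_post_community ON post (community_id)")
after = plan(query)   # with the index: indexed search
```

The same investigation in Postgres would use `EXPLAIN (ANALYZE)`; finding queries that scan instead of seek is exactly the sort of help the linked issue is asking for.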

              • Luca@lemmy.world · 3 points · 1 year ago

                I fully agree: there’s no reason the DB should be falling over when I’ve seen a single Postgres instance (with a read replica, granted) handle >1M users just fine.

                Unfortunately my SQL skills haven’t improved since my DB class in university, so I won’t be much help. I’ll be keeping an eye on the repo of course, and I can give some consulting/guidance or even open some PRs myself when they decide to implement horizontal scaling.

                • deadcyclo@lemmy.world · 1 point · 1 year ago

                  Yeah, I’m in the same boat. My SQL skills aren’t impressive either, since other people at work handle optimization. I haven’t used Rust either (yet), so I can’t really contribute there. Though I’m considering starting work on a cross-platform mobile app. I haven’t worked on mobile apps in a good six or seven years, so I feel like it’s high time I got back up to speed. (But knowing me, I’ll end up making something half-finished and then start procrastinating.)

              • Justin@lemmy.jlh.name · 1 point · 1 year ago

                Yeah, I think a single postgres cluster with read replicas should be able to handle the needs of a single instance just fine. You can then horizontally scale the backend and frontend to keep up using containers.

                Ideally, that instance can scale up to a few million users, and then federation can provide the horizontal scaling that takes the lemmyverse up to Reddit scale.

                The backend just needs to handle databases better: adding support for read replicas, making queries more efficient, etc.
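The read-replica idea above can be sketched in a few lines: writes go to the primary, reads fan out round-robin across replicas. This is a hypothetical illustration, not Lemmy’s code; the `FakeConn` class just records which connection received which statement so the routing is visible.

```python
class FakeConn:
    """Stand-in for a DB connection; records the SQL it is asked to run."""
    def __init__(self):
        self.calls = []

    def execute(self, sql, params=()):
        self.calls.append(sql)

class ReplicaRouter:
    """Route writes to the primary, spread reads round-robin over replicas."""
    def __init__(self, primary, replicas=None):
        self.primary = primary
        self.replicas = replicas or [primary]
        self._next = 0

    def execute(self, sql, params=()):
        if sql.lstrip().upper().startswith("SELECT"):
            conn = self.replicas[self._next % len(self.replicas)]
            self._next += 1
        else:
            conn = self.primary
        return conn.execute(sql, params)

primary, r1, r2 = FakeConn(), FakeConn(), FakeConn()
router = ReplicaRouter(primary, [r1, r2])
router.execute("SELECT * FROM post")
router.execute("SELECT * FROM comment")
router.execute("INSERT INTO post (title) VALUES (?)", ("hi",))
```

A real implementation also has to worry about replication lag (read-your-own-writes), which is why this usually lives in the backend rather than in a dumb proxy.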

                Not sure how well pict-rs scales, but that’s probably pretty light already; vertical scaling might be good enough that it’ll always be limited by the DB.

                But yeah, I guess the worst-case scenario is that Postgres doesn’t scale enough and we need to switch to something like CockroachDB. Or go for snowflake IDs and NoSQL like Twitter did back in the day.
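For readers unfamiliar with the reference: Twitter’s Snowflake scheme packs a millisecond timestamp, a worker ID, and a per-millisecond sequence into a 64-bit integer, so many nodes can mint unique, roughly time-ordered IDs without coordinating through the database. A minimal sketch (the bit widths follow the original 41/10/12 layout; the epoch constant is Twitter’s, but any fixed epoch works):

```python
import threading
import time

class SnowflakeGenerator:
    """Sketch of a Snowflake-style 64-bit ID generator (41/10/12 bit layout)."""
    EPOCH = 1288834974657  # Twitter's custom epoch in ms; any fixed epoch works

    def __init__(self, worker_id: int):
        assert 0 <= worker_id < 1024  # 10 bits for the worker ID
        self.worker_id = worker_id
        self.sequence = 0
        self.last_ms = -1
        self.lock = threading.Lock()

    def next_id(self) -> int:
        with self.lock:
            now = int(time.time() * 1000)
            if now == self.last_ms:
                # Same millisecond: bump the 12-bit sequence.
                self.sequence = (self.sequence + 1) & 0xFFF
                if self.sequence == 0:
                    # Sequence exhausted; wait for the next millisecond.
                    while now <= self.last_ms:
                        now = int(time.time() * 1000)
            else:
                self.sequence = 0
            self.last_ms = now
            return ((now - self.EPOCH) << 22) | (self.worker_id << 12) | self.sequence
```

Because the timestamp occupies the high bits, IDs from one worker sort in creation order, which keeps B-tree inserts append-mostly.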

          • veroxii@lemmy.world · 2 points · 1 year ago

            I believe that’s mostly because of websockets. The devs are changing the UIs from websockets to normal REST API calls over HTTP. That should shed a lot of load, since you’ll have to manually hit refresh to reload the site (like Reddit), and it will allow horizontal scaling, since each HTTP request doesn’t need to land on the same server as the previous one.

            • deadcyclo@lemmy.world · 1 point · 1 year ago

              Getting rid of websockets would help a lot. But you still might not be able to have standalone nodes. You might still need a cluster of nodes with a master and slaves due to the federated nature of Lemmy, such that only one node at a time handles federation events from other servers. I don’t know enough about the protocol to know if that is the case or not. Just as an example, I’m thinking of situations where one node gets a federation event for a post, then a different node gets a federation event with some sort of change to that post, and handles it faster than the first node. That event would then fail because the post hasn’t been created yet.
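One common answer to the ordering problem described above is to buffer events whose prerequisites haven’t arrived yet and replay them once they do. This is a hedged sketch of that idea, not Lemmy’s actual federation code; the event shapes are invented for the example.

```python
# Hypothetical sketch: buffer federation events that arrive before the
# object they modify exists, then replay them once the create shows up.
posts = {}
pending_edits = {}  # post_id -> edits that arrived before the post existed

def handle_event(event):
    kind, post_id = event["type"], event["post_id"]
    if kind == "create":
        posts[post_id] = event["body"]
        # Replay any edits that got here first, in arrival order.
        for body in pending_edits.pop(post_id, []):
            posts[post_id] = body
    elif kind == "edit":
        if post_id in posts:
            posts[post_id] = event["body"]
        else:
            pending_edits.setdefault(post_id, []).append(event["body"])

# Out-of-order delivery: the edit lands before the create it depends on.
handle_event({"type": "edit", "post_id": 1, "body": "edited"})
handle_event({"type": "create", "post_id": 1, "body": "original"})
```

With buffering like this, nodes don’t have to agree on a single master for ingestion, at the cost of holding orphaned events (and eventually expiring ones whose create never arrives).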

              • veroxii@lemmy.world · 1 point · 1 year ago

                Interesting. I didn’t think of that. Maybe some sort of queue would help, but yeah not sure. Maybe the protocol can handle that already. I’ll have to read through it at some point. :-)

      • CanadaPlus@lemmy.sdf.org · 2 points · 1 year ago

        I’m developing some Kubernetes tools to make it easy for people to create their own scalable instances like lemmy.world has, so I’m hoping that that can make it a bit easier.

        That sounds amazing.

        • Justin@lemmy.jlh.name · 4 points · 1 year ago

          You can follow the discussion about Kubernetes configuration and official Kubernetes support over on GitHub.

          I’m Justin on the issue thread.

    • Earthwormjim91@lemmy.world · 8 points · 1 year ago

      Well, by that point I imagine he would just shut down open registration and no new users could join this instance.

      And with the beauty of federation, that wouldn’t really prevent growth. People would just need to join another instance and they would still get all the content from here.