Apologies in advance for sharing two link posts here two days in a row. Unemployment may be driving me a little nuts… 😅

I’ve been working on Satounki since I got laid off last month. It’s the culmination of a lot of experience building similar ad-hoc internal tooling at various places throughout my professional career.

Satounki already includes:

  • AWS support
  • GCP support
  • Cloudflare support
  • Auto-generated Terraform providers from the Rust API
  • Auto-generated Typescript client wrapper from the Rust API
  • Slack bot for request notifications, approvals and rejections
  • CLI for requests, approvals and rejections
  • Dashboard for exploring policies, requests and stats

The scope of this project is pretty big and I’m looking for contributors.

The majority of the project is written in Rust, including the code that generates the Go and TS clients. The stack is pretty simple: Actix, Diesel, SQLite, Tera, etc., so if you have experience writing web apps in Rust it should feel familiar!

Even if this stack is totally new to you, it’s a great project for building familiarity and experience with it, especially if you can help improve the quality of the generated Go and TS code along the way!
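
If you want a feel for the stack before diving in, here’s a rough sketch of an Actix-web handler of the kind you’d write in a project like this. Everything in it (the route, the request/response types, the field names) is an illustrative assumption for the example, not Satounki’s actual API; in the real service the handler would also persist the request via Diesel/SQLite and trigger the Slack notification.

```rust
// A minimal Actix-web sketch in the spirit of the Satounki stack.
// The route, types and field names are illustrative assumptions,
// not the project's actual API.
use actix_web::{post, web, App, HttpResponse, HttpServer, Responder};
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct NewAccessRequest {
    policy: String,        // e.g. "storage_analytics_ro" (name from the README example)
    reason: String,        // free-text justification shown to approvers
    duration_minutes: u32, // how long the elevated access should last
}

#[derive(Serialize)]
struct RequestCreated {
    id: u64,
    status: &'static str,
}

#[post("/requests")]
async fn create_request(body: web::Json<NewAccessRequest>) -> impl Responder {
    // Placeholder: a real handler would insert into the database and notify approvers here.
    let _ = (&body.policy, &body.reason, body.duration_minutes);
    HttpResponse::Created().json(RequestCreated { id: 1, status: "pending" })
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(create_request))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
```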

  • lemmyvore

    If I’m reading the Github page correctly, the service serves as its own authoritative storage for the requests? If yes, that’s a blocker for most organizations that could use this because they’ll never trust a new tool like this with their sensitive data. It can be useful for modeling and orchestrating request flows that work with other APIs, provided it never touches data at rest.

    • Jeezy@lemmy.worldOP

      What sort of sensitive data are you imagining in your reading of the README? It would be useful to understand so I can update the language appropriately 🙏

      • lemmyvore

        Well, the actual rules — who gets access to what — where is that stored and how is it secured, to what standards? Is there logging, audit, non-repudiation, tamper-proofing, time-stamping, etc.?

        • Jeezy@lemmy.worldOP

          tl;dr: all the same caveats of self-hosted software apply; don’t do anything you wouldn’t do with a self-hosted database or monitoring stack.

          Well the actual rules — who gets access to what

          The rules themselves are the same public rules documented in the IAM docs for AWS, GCP, etc., while the collections of these public rules defined at the org level (e.g. the storage_analytics_ro example in the README) will likely be stored in two ways: 1) in a (presumably private) infra-as-code repo, most probably using the Terraform provider or a future Pulumi provider, and 2) in the data store backing the service, which I talk about more below.
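
          Purely as an illustration of what such an org-level collection could look like in code (the struct and field names below are my guesses for the example, not Satounki’s actual schema), it boils down to a named bundle of provider-documented permissions:

          ```rust
          // Hypothetical shape of an org-level policy collection such as the
          // README's storage_analytics_ro example; field names are illustrative
          // assumptions, not Satounki's actual schema. The permissions themselves
          // are the public, provider-documented IAM actions.
          use serde::{Deserialize, Serialize};

          #[derive(Serialize, Deserialize)]
          struct PolicyCollection {
              name: String,                 // e.g. "storage_analytics_ro"
              aws_permissions: Vec<String>, // e.g. "s3:GetObject"
              gcp_permissions: Vec<String>, // e.g. "storage.objects.get"
          }
          ```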

          “Who received access to what” is tracked in the runtime and audit logs. However, since this is a temporary elevated-access management solution, where anyone given access to the service can make a request that is then approved or denied, it is not the right place or tool for a general, long-lived least-privilege mapping of “this rule => this person/this whole team”.
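
          To make that distinction concrete, here’s a rough sketch of the kind of short-lived request record being tracked; all names are hypothetical, not the real schema:

          ```rust
          // Rough sketch of the short-lived record being tracked; every state
          // transition here is the sort of event that lands in the audit log.
          // All names are illustrative assumptions, not the real schema.
          use std::time::SystemTime;

          enum RequestState {
              Pending,
              Approved { approver: String },
              Denied { approver: String },
              Expired, // access is revoked automatically once the window closes
          }

          struct TemporaryAccessRequest {
              requester: String,
              policy: String, // e.g. "storage_analytics_ro"
              state: RequestState,
              requested_at: SystemTime,
              expires_at: SystemTime,
          }
          ```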

          where is that stored and how is it secured, to what standards?

          This is largely up to the team responsible for the implementation and maintenance, just as it would be for a self-hosted monitoring stack like Prom + Grafana or a self-hosted PostgreSQL instance. You can have your data exposed through public IPs, FQDNs and buckets with PostgreSQL or Prom + Grafana, or you can have them completely locked down and only reachable over a private network; the same applies to Satounki.

          Is there logging, audit, non-repudiation, tamper-proofing, time-stamping, etc.?

          Yes, yes, yes, yes and yes, though the degree of confidence in each of these depends on the competence of the people responsible for implementing and maintaining the service, as is the case with all things self-hosted.
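
          As one concrete example of what “tamper-proof” can mean for a self-hosted audit trail (a generic technique, not necessarily how Satounki implements it): each audit entry can commit to a hash of the previous entry, so rewriting a past row breaks every hash after it and becomes detectable on verification.

          ```rust
          // Generic hash-chained audit log sketch; this is a standard technique
          // for tamper evidence, not necessarily Satounki's actual implementation.
          use std::collections::hash_map::DefaultHasher;
          use std::hash::{Hash, Hasher};
          use std::time::SystemTime;

          struct AuditEntry {
              timestamp: SystemTime,
              actor: String,
              action: String, // e.g. "request.approved"
              prev_hash: u64, // hash of the previous entry
              hash: u64,      // hash over this entry's fields plus prev_hash
          }

          // DefaultHasher keeps this example dependency-free; a real audit chain
          // should use a cryptographic hash such as SHA-256 (e.g. the sha2 crate).
          fn entry_hash(timestamp: &SystemTime, actor: &str, action: &str, prev_hash: u64) -> u64 {
              let mut h = DefaultHasher::new();
              format!("{timestamp:?}").hash(&mut h);
              actor.hash(&mut h);
              action.hash(&mut h);
              prev_hash.hash(&mut h);
              h.finish()
          }

          fn append(log: &mut Vec<AuditEntry>, actor: &str, action: &str) {
              let prev_hash = log.last().map(|e| e.hash).unwrap_or(0);
              let timestamp = SystemTime::now();
              let hash = entry_hash(&timestamp, actor, action, prev_hash);
              log.push(AuditEntry {
                  timestamp,
                  actor: actor.to_owned(),
                  action: action.to_owned(),
                  prev_hash,
                  hash,
              });
          }
          ```

          Combined with shipping the log to a separate write-once location, a scheme like this limits what even a database Administrator can quietly rewrite, which ties into the point below about least privilege.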

          If this is deployed in an organization which doesn’t adhere to at least a basic least-privilege permissions approach, there is nothing stopping a bad internal actor with Administrator permissions in that environment from opening up the database directly and making whatever malicious changes they want.