• 0 Posts
• 76 Comments
Joined 1 year ago · Cake day: June 13th, 2023


  • I struggled with this for a long time, and then I just decided to use Synology Photos.

    It has albums, tagging, geolocation, and sharing. It has phone picture backup, and it’s inherently backed up since it lives on my NAS, whose data I back up again elsewhere.

    I want to keep the things I really care about friction-free and not too dependent on me, so that I can still experiment.

    I didn’t try PiGallery2 though, maybe I will have a look!




  • sudneo@lemmy.world to Selfhosted@lemmy.world · Docker or podman? · 8 months ago

    I really thought swarm was dead :)

    To be honest, some Kubernetes distributions make cluster operations minimal (I use k0s managed via Ansible)!

    Either way, the moment you go from N containers on one box to N containers on M boxes, you need to start considering how to handle stateful applications, load balancing, etc. And that in general requires knowledge of a domain different from simply having applications wrapped in containers locally.


  • Yeah, ultimately every container has its own veth interface, so you can do shaping using tc on those.

    Edit: I had a look at docker-tc. It does what you want, but unless your use case is complex, I would really think twice about running a tool written in bash that has access to the Docker socket (i.e. a trivial node escape) and runs with the NET_ADMIN capability.

    That’s a lot of power to do something you can also do with a few lines of code executed after you start the container (see the sketch below). Again, provided that your use case is not complex.
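
    A minimal sketch of that “few lines after start” approach (the container name myapp is hypothetical; assumes iproute2 and root on the host):

    # Find the host-side veth peer: the container’s eth0 iflink number
    # matches the ifindex of the host-side interface of the veth pair.
    CONTAINER=myapp
    IFLINK=$(docker exec "$CONTAINER" cat /sys/class/net/eth0/iflink)
    VETH=$(grep -l "^${IFLINK}$" /sys/class/net/veth*/ifindex | awk -F/ '{print $5}')

    # Shape traffic through that veth to 10 Mbit/s with a token bucket filter.
    # Note: the root qdisc on the host-side veth shapes traffic going *into*
    # the container; the other direction needs ingress policing or an ifb device.
    tc qdisc add dev "$VETH" root tbf rate 10mbit burst 32kbit latency 400ms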


  • Cgroups have the ability to limit TCP and total network bandwidth. I don’t know off the top of my head whether this can be configured at runtime (i.e. via docker run), but you can specify the cgroup parent to use at runtime. This means you can pre-create the cgroup, set the limits, and start the container with that parent cgroup.

    You can also run some hook script after launch that adds the PID to a cgroup every time the container is launched (sketched below), or possibly use tc.
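
    A minimal sketch of the hook-script idea, assuming cgroup v1 with the net_cls controller mounted and a hypothetical container named myapp (the same parent cgroup could instead be passed via docker run --cgroup-parent):

    # Pre-create a cgroup whose classid tc can match (0x00100001 = tc class 10:1).
    mkdir -p /sys/fs/cgroup/net_cls/limited
    echo 0x00100001 > /sys/fs/cgroup/net_cls/limited/net_cls.classid

    # After launch, move the container’s init process into that cgroup.
    # (Processes already forked keep their old cgroup, so run this right after start.)
    PID=$(docker inspect -f '{{.State.Pid}}' myapp)
    echo "$PID" > /sys/fs/cgroup/net_cls/limited/cgroup.procs

    # Throttle everything carrying that classid to 5 Mbit/s on egress.
    tc qdisc add dev eth0 root handle 10: htb
    tc class add dev eth0 parent 10: classid 10:1 htb rate 5mbit
    tc filter add dev eth0 parent 10: protocol ip prio 10 handle 1: cgroup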

    I am not aware of the ability to only limit uplink bandwidth, but I have not researched this.




  • sudneo@lemmy.world to Selfhosted@lemmy.world · Docker or podman? · 8 months ago

    You have a bunch of options:

    kubectl run $NAME --image=$IMAGE
    

    This just creates a pod running the specified image. If you kill the pod, or it terminates, it won’t be run again. In general though, you probably want to do some customization before running (maybe you need volumes, secrets, env, ports, labels, securityContext, etc.), and for that you can simply let kubectl generate the boilerplate YAML and then make some edits:

    kubectl run $NAME --image=$IMAGE --dry-run=client -o yaml > mypod.yaml
    # edit mypod.yaml
    kubectl create -f mypod.yaml
    

    You can do the same with a deployment or statefulset:

    kubectl create deployment $NAME -n $NAMESPACE [...] --dry-run=client -o yaml > deployment.yaml
    

    In case you don’t need anything fancy, the kubectl create subcommand allows you to create simple workloads directly, so probably that’s the answer to your question.
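
    For instance (nginx here is just a placeholder name/image), a simple deployment can be created entirely from flags:

    kubectl create deployment nginx --image=nginx --replicas=2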



  • sudneo@lemmy.world to Selfhosted@lemmy.world · Docker or podman? · 8 months ago

    I would say Docker. There is no substantial benefit in running Podman, while Docker is a widely adopted tool (which means more tooling in the ecosystem, easier answers to questions, etc.). The difference is not huge tbh, and some time ago the biggest advantage for Podman was being able to run rootless, while Docker was stuck with a root daemon. This is not the case anymore (Docker can run rootless), so I would say unless you have some specific argument for Podman, stick with Docker.
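
    For reference, rootless Docker can be set up with the upstream helper script (assuming the docker-ce-rootless-extras package is installed):

    dockerd-rootless-setuptool.sh install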






  • Hey, that’s actually a very nice project, and to be honest, I can kinda imagine that the time saving is minimal, if it’s there at all. Partially, I think this is because we are talking about super small amounts of time anyway! Moving files around is totally fast with a mouse, and in general I still do it like that.

    For me speed is really a secondary thing; it’s about ergonomics and limiting my movements. Chances are I am already typing on the keyboard when I want to do something, so it might not be faster to switch to the browser with mod+2 and back to the terminal with mod+1, but it’s less movement than finding the mouse, rotating the shoulder (my split kb is open at shoulder width), etc. I would also argue that it requires less focus, because it’s an inherently more mechanical action than finding a button to click, or dragging and dropping something.

    Either way, it’s for sure something interesting to look at!


  • Oh no, I get it, I do have a work-issued MacBook Pro which I am currently not using in favour of a Linux machine. The main reason for me is ergonomics. My laptop sits closed in a vertical stand, and I cannot imagine moving my hands so much to do stuff. I do basically everything the trackpad does with i3 keybindings, which I find not only faster, but which also reduce the movement of my arms and ultimately limit wrist/arm stress.

    Obviously I completely agree that if one has, or prefers to work with, trackpads, Apple’s are honestly great.


  • If there is already another reverse proxy, doing this is IMHO worse than just running a container and adding one more rule in the proxy, if a rule is even needed (with Traefik, for example, it’s not; see the sketch below). I also build all my servers with IaC and a repeatable setup, so installing stuff manually breaks the model (I want to be able to migrate servers with minimal manual action, as I have already had to do twice…).
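
    A minimal sketch of the Traefik case (the name myapp, the image, and the hostname are hypothetical): with Traefik’s Docker provider enabled, a label on the container is all it needs to start routing, so no manual proxy rule is written:

    docker run -d --name myapp \
      --label 'traefik.http.routers.myapp.rule=Host(`myapp.example.com`)' \
      myimage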

    The job is simple either way; I would say it mostly depends on which ecosystem someone is buying into and what secondary requirements one has.