• 337 Posts
  • 22 Comments
Joined 1 year ago
Cake day: June 9th, 2023

  • How does this analogy work at all? LoRA is chosen by the modifier to be low-rank to accommodate some desktop/workstation memory constraint, not because the other weights are “very hard” to modify if you happen to have the necessary compute and I/O. The development of LoRA is also largely driven by storage reduction (hence not too many layers modified) and by preserving generalizability (since training generalizable models is hard). The Kronecker-product versions, in particular, were first developed in the context of federated learning, not for desktop/workstation fine-tuning (also, LoRA is fully capable of modifying all weights; it is rather a technique to do so in a correlated fashion that reduces the size of the gradient update). And much of the development around LoRA happened in the context of otherwise fully open datasets (e.g. LAION) that are simply not manageable in desktop/workstation settings.
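
    To make the parameter accounting concrete, here is a minimal PyTorch sketch of how a low-rank (LoRA) or Kronecker-product (LoKr-style) update still touches every entry of a weight matrix while training far fewer numbers. The dimensions and rank are assumptions chosen only for the arithmetic, not taken from any particular model.

    ```python
    import torch

    d_out, d_in, r = 4096, 4096, 8          # hypothetical layer size and rank

    W = torch.randn(d_out, d_in)            # frozen pretrained weight

    # LoRA: the update is a rank-r product B @ A, so every entry of W can
    # change, but the update is parameterized by far fewer trainable numbers.
    A = torch.zeros(r, d_in, requires_grad=True)
    B = torch.zeros(d_out, r, requires_grad=True)
    delta_lora = B @ A                      # full 4096 x 4096 update, rank <= r

    # Kronecker-product variant (LoKr-style): delta = kron(C, D), again a full
    # 4096 x 4096 update from an even smaller set of trainable parameters.
    C = torch.zeros(64, 64, requires_grad=True)
    D = torch.zeros(64, 64, requires_grad=True)
    delta_kron = torch.kron(C, D)

    print(W.numel())                        # 16,777,216 frozen parameters
    print(A.numel() + B.numel())            # 65,536 trainable (LoRA)
    print(C.numel() + D.numel())            # 8,192 trainable (Kronecker)
    ```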

    This narrow perspective of “source” takes away from the actual usefulness of compute/training here. Datasets from e.g. LAION to Common Crawl have been available for some time, along with training code (sometimes independently reproduced) for the Imagen diffusion model or GPT. It was only when e.g. GPT-J came along, with somebody investing in the compute (including working out how to scale it on their specific cluster), that the result became useful.


  • This is a very shallow analogy. Fine-tuning is rather the standard technical approach to reducing compute, even if you have access to the code and all training data. Hence there has always been a rich and established ecosystem for fine-tuning, regardless of “source.” Patching closed-source binaries is not the standard approach, since compilation is far less computationally intensive than today’s large-scale training.
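
    As a minimal sketch of that standard compute-reduction approach (assuming a PyTorch/torchvision setup; the ResNet-50 here is just a stand-in for any pretrained model): freeze the pretrained weights and train only a small new head, so most of the network never needs gradients.

    ```python
    import torch
    import torch.nn as nn
    import torchvision.models as models

    # Load a pretrained backbone (stand-in for any released weights).
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

    # Freeze every pretrained parameter: no gradients, hence far less compute.
    for p in backbone.parameters():
        p.requires_grad = False

    # Replace the classification head with a new, trainable one (10 classes assumed).
    backbone.fc = nn.Linear(backbone.fc.in_features, 10)

    # Only the head's parameters go to the optimizer.
    optimizer = torch.optim.AdamW(backbone.fc.parameters(), lr=1e-3)
    ```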

    Java byte code is a far-fetched example. The JVM assumes a specific architecture that is particular to the CPU-dominant world in which it was developed, and Java byte code cannot be trivially (let alone efficiently) executed on a GPU or FPGA, for instance.

    And by the way, the issue of weight portability is far more relevant than the forced comparison to (simple) code can capture. Today’s large-scale training code is usually very specific to a particular cluster (or TPU, WSE), as opposed to the resulting weights. Even if you got hold of somebody’s training code, you would often have to reinvent the wheel to scale it to your own particular compute hardware, interconnect, I/O pipeline, etc. This is not commodity open source on your home PC or workstation.
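
    The portability point can be illustrated with a short PyTorch sketch (the file name and devices are assumptions): a state_dict produced on one kind of accelerator can be reloaded on completely different hardware with a single map_location argument, something the cluster-specific training code cannot offer.

    ```python
    import torch
    import torch.nn as nn

    # Stand-in model; in practice this would be the published architecture.
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

    # Weights saved elsewhere, e.g. on a GPU/TPU cluster.
    torch.save(model.state_dict(), "weights.pt")

    # Reload the very same weights on whatever hardware is available locally.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    state = torch.load("weights.pt", map_location=device)
    model.load_state_dict(state)
    model.to(device)
    ```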


  • The situation is somewhat different and more nuanced. With weights there are tools for fine-tuning (LoRA/LoHa, PEFT, etc.), which is a different situation than with binaries for programs. You can see that despite e.g. LLaMA being “compiled,” others can build on it substantially and make models that surpass the previous iteration (see e.g. WizardLM 2 recently, in relation to LLaMA 2). Weights are also architecture-independent to a much larger degree than binaries (you can usually cross-train/run inference on GPU, Google TPU, Cerebras WSE, etc. with the same weights).
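
    For a sense of what that fine-tuning ecosystem looks like in practice, here is a hedged sketch using the PEFT library to attach LoRA adapters to a pretrained causal LM. The checkpoint name and the target module names are assumptions and would need to match the actual model.

    ```python
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Hypothetical checkpoint; substitute whatever weights you actually have.
    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

    config = LoraConfig(
        r=8,                                  # low-rank dimension
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],  # attention projections (assumed names)
        task_type="CAUSAL_LM",
    )

    # Wrap the frozen base model; only the small adapter weights are trainable.
    model = get_peft_model(base, config)
    model.print_trainable_parameters()
    ```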




  • Going by my own statistics of how many articles I feel are worth posting/linking on Lemmy, the most direct alternative to Kotaku is Eurogamer. PCGamer, PCGamesN and Rock Paper Shotgun are occasionally OK, but you have to cut through a lot of spam and clickbait (i.e. exactly this “50 guides per week” type of corporate guidance). Not sure if this is also the state Kotaku will end up in. The Verge sometimes also has good articles, but the flood of gadget-consumerism articles there is obnoxious.



  • ylai@lemmy.ml OP to Linux@lemmy.ml · FUSE Passthrough Mode Merged For Linux 6.9 · 8 months ago

    Well, if you have a constructive suggestion for which site to link instead regarding kernel developments, I am all ears:

    • Not sure that raw commits are readable or have sufficient context for non-kernel-developer readers here
    • LWN, particularly its timely kernel-development news, has gone mostly behind a paywall, and there will be (legitimate) complaints if I link articles needing an LWN subscription

  • ylai@lemmy.ml OP to Linux@lemmy.ml · FUSE Passthrough Mode Merged For Linux 6.9 · 8 months ago

    Not sure what called for this blatant personal attack. My post history speaks for itself, quite in comparison to yours. And Phoronix is a well-known Linux website, and its test suite is in fact referenced in various regression tests/patches on the LKML (also not sure what kind of kernel development you have done, if any).







  • There might be several misunderstandings:

    • Docker Desktop ≠ Docker Engine, and I think what you (and several in this thread) are thinking of is actually Docker Engine. Docker Desktop ultimately includes a Docker Engine inside, but it does not appear that you need the virtual machine it adds (e.g. for running non-Linux code). See: https://docs.docker.com/desktop/faqs/linuxfaqs/#what-is-the-difference-between-docker-desktop-for-linux-and-docker-engine
    • Docker Desktop is based on KVM, which already works with Flatpak, so this is nothing new. For example, GNOME Boxes is available as a Flatpak and provides a way to run KVM guests on SteamOS.
    • Starting with version 3.5 (the current stable), SteamOS already includes Podman in the default installation. And running the daemon-based Docker Engine “bare metal” is not going to be any easier with the immutable filesystem. Docker Desktop sidesteps this by using KVM, but that adds another layer with performance loss, vs. just running Podman containers directly.

    So what you want is already available, and no Docker Desktop is actually needed (see the sketch below).
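
    As a minimal sketch of that route (assuming a rootless Podman socket started with “podman system service”; the UID in the socket path is an assumption and varies per user), the ordinary Docker SDK for Python can talk to Podman’s Docker-compatible API, with no extra virtual machine involved:

    ```python
    import docker  # docker-py, speaking to Podman's Docker-compatible socket

    # Assumed rootless socket path; adjust the UID/path to your own session.
    client = docker.DockerClient(base_url="unix:///run/user/1000/podman/podman.sock")

    # Run a throwaway container entirely through Podman.
    output = client.containers.run(
        "docker.io/library/alpine", "echo hello from podman", remove=True
    )
    print(output.decode())
    ```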


  • ylai@lemmy.ml to Memes@lemmy.ml · How does she know... · edited · 11 months ago

    “AMD’s support for AI is just fine”

    This is quite untrue, especially if you do actual research and not just run other people’s models. For example, ROCm is missing in many sparse autograd frameworks, e.g. pytorch_sparse, and there is no viable alternative to Nvidia’s MinkowskiEngine. This is needed if you do any state-of-the-art convnets with attention-like sparsity.
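
    To illustrate the kind of sparse autograd workload at issue, here is a small sketch using plain torch.sparse as a stand-in (libraries such as pytorch_sparse or MinkowskiEngine ship their own CUDA kernels, and it is those kernels that typically lack ROCm builds; on ROCm builds of PyTorch the HIP backend is exposed through the torch.cuda API):

    ```python
    import torch

    # On a ROCm build of PyTorch, torch.cuda reports the HIP/AMD device.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    indices = torch.tensor([[0, 1, 2], [2, 0, 1]], device=device)
    values = torch.randn(3, device=device)
    sparse = torch.sparse_coo_tensor(indices, values, (3, 3), requires_grad=True)

    dense = torch.randn(3, 3, device=device)

    # Sparse-dense matmul with autograd through the sparse operand.
    out = torch.sparse.mm(sparse, dense)
    out.sum().backward()
    print(sparse.grad)  # gradient arrives as a sparse tensor
    ```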