re 1: out of curiosity, do you encounter dnsleaks when using wireguard?
re 4: you can also check out https://starship.rs/, which helps configure shell prompt very intuitively with a toml file.
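For a taste of what that looks like, here's a tiny `starship.toml` sketch (keys written from memory; double-check names against the starship docs before relying on them):

```toml
# ~/.config/starship.toml — minimal example

# don't insert a blank line between prompts
add_newline = false

[character]
success_symbol = "[➜](bold green)"
error_symbol = "[✗](bold red)"

[git_branch]
symbol = " "
```

Drop it in `~/.config/starship.toml` and the prompt updates on the next shell start.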
lol how did u do that?
Hold up, are you sure you can’t view Discussions or Wiki? For which repos can’t you view them?
I’m fine viewing them for public repos that I usually visit.
Asking to make sure that GitHub is not slowly rolling out this lockdown.
the whole premise of OP is that this monitors people, and many organizations use TOTP, which one could also use without an internet connection or a phone, AFAIK.
I’m in academia and I wish this were implemented more. Data breaches are getting quite common, and GitHub is so entwined in software engineering that it is critical to increase security measures.
or maybe keep most of them in a folder, plus one file that defines their locations as environment variables?
what are the other alternatives to ENV that are more preferred in terms of security?
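One common alternative (a sketch, not the only answer): store each secret in its own file with strict permissions and export only the file's *path* in the environment. Unlike an env var holding the secret itself, the value isn't inherited by every child process. A minimal, hypothetical helper:

```python
import os
import stat


def load_secret(path: str) -> str:
    """Read a secret from a file, refusing it if group/others have any access.

    The idea: the environment carries only a path (e.g. API_KEY_FILE=...),
    while the secret itself lives in a chmod-600 file.
    """
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(f"{path} must not be group/world accessible (chmod 600)")
    with open(path) as f:
        return f.read().strip()
```

This is roughly the pattern behind Docker/systemd "secrets" mounts; OS keyrings are another option.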
yeah I guess the formatting and the verbosity seem a bit annoying? Wonder what alternative solutions could better engage people from mastodon, which is what this bot is trying to address.
edit: just to be clear, I’m not affiliated with the bot or its creator. This is just my observation from multiple posts I see this bot comments on.
I’m curious, why is this bot currently being downvoted for almost every comment it makes?
Thanks for the suggestions! I’m actually also looking into llamaindex for more conceptual comparison, though I haven’t gotten around to building an app yet.
Any general suggestions for locally hosted LLMs with llamaindex, by the way? I’m also running into some issues with hallucination. I’m using Ollama with llama2-13b and the bge-large-en-v1.5 embedding model.
Anyway, aside from conceptual comparison, I’m also looking for more literal comparison. AFAIK, the choice of embedding model affects how similarity is defined. Most current LLM embedding models are abstract, so the similarity will be conceptual: “I have 3 large dogs” and “There are three canines that I own” will probably score as very similar. Do you know which embedding model I should choose for a more literal comparison?
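For comparison, a purely literal similarity needs no embedding model at all. A minimal sketch using Jaccard similarity over character trigrams (one of several classic lexical measures; an assumption about what "literal" should mean here, not a recommendation of a specific library):

```python
def char_ngrams(text: str, n: int = 3) -> set[str]:
    """All lowercase character n-grams of a string."""
    t = text.lower()
    return {t[i:i + n] for i in range(len(t) - n + 1)}


def literal_similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity of the two strings' character n-gram sets (0..1)."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    if not ga and not gb:
        return 1.0
    return len(ga & gb) / len(ga | gb)
```

On the example above, the paraphrase pair scores near zero while a near-verbatim copy scores high, which is exactly the opposite of what a conceptual embedding gives you.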
That aside, like you indicated, there are some issues. One of them involves length. I hope to find something that can build up to similar paragraphs iteratively from similar sentences. I can take a stab at coding it up, but was just wondering if there are similar frameworks out there already that I can model after.
how bout baserow.io or nocodb cloud? Haven’t used them, but I think they’re open source. They don’t have mobile apps for editing, though, AFAIK.
this is interesting, but it’s not open source yet? I couldn’t find the code; I only saw the author saying that the intent is to go open source.
I think apps like this are really interesting and could really benefit from selfhosting (the LLM, the app deployment, or both), especially due to the potential security/privacy issues, as well as lock-in issues with OpenAI.
got into coding cuz I found out that’s how I can automate analysis and play with research questions more easily.
I think many have also been wondering about version control of legislation/law documents for some time as well. But I’ve never understood why it hasn’t been realized yet.
As much as I despise snap, this incident raises some questions about how other popular cross-distro app stores, like Flathub and nix-channels/packages, provide guardrails against malware.
I’m aware Flathub has a “verified” check for packages from the original maintainers/developers, but I’m unsure about nix-channels. Even then, Flathub packages are not reviewed by anyone, are they?
thanks for your answer! Is this the same as or different from indexing to provide context? I saw some people ingesting large corpora of documents/structured data, like with LlamaIndex. Is that an alternative way to provide context, or similar?
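My rough mental model of "indexing to provide context" (an assumption on my part, and the scoring here is a toy word-overlap stand-in for real embedding retrieval):

```python
def score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))


def build_prompt(query: str, docs: list[str], k: int = 1) -> str:
    """Retrieve the k most relevant docs and stuff them into the prompt."""
    top = sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]
    context = "\n".join(f"- {d}" for d in top)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

So the index decides *which* text reaches the model, and the model still only ever sees plain prompt text; frameworks like LlamaIndex automate this retrieve-then-prompt loop at scale.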
I know nothing about “in-context learning” or legal stuff, but intuitively, don’t legal documents tend to reference each other, especially the more complicated ones? If so, how would you apply in-context learning if you don’t know which ones may be relevant?
When you find one or successfully train one, I’d love to know as well. Maybe you can crosspost this?
I saw this dataset on HuggingFace, does it fit your use case? https://huggingface.co/datasets/lexlms/lex_files
Not related to warp, but just out of curiosity, which protocols have you tried? In one or two univs I visited, I had to switch to TCP instead of UDP for it to work. Not sure why.
Here are some options:
Based on this reddit comment, that website is not affiliated with the magic-wormhole CLI tool.