IMO (neo)VIM is great for writing text as well, when all you need is markdown level formatting. Personally I use vimwiki a lot (many years by now).
In my experience, getting one can be more about politics and fulfilling certain management checkboxes than about technical skill and experience.
Assuming one hears their own voice in recorded form enough times, that “strange” feeling it might give at first subsides.
And it will find you the most answers online in case you have a git-related question.
Oh boy… can’t promise you that I will last that long. I know it sounds pathetic, but is replying to one’s own comment an option (just for stress testing)?
Just looked it up a bit: https://microsoft.github.io/monaco-editor/
AFAIU, monaco
is just about the editor part. So if an electron application doesn’t need an editor, this won’t really help to improve performance.
Having gone through learning and developing with electron
myself, this (and the referenced links) was a very helpful resource: https://www.electronjs.org/docs/latest/tutorial/performance
In essence: “measure, measure, measure”.
Then optimize what actually needs optimizing.
There’s no easy, generic answer on how to get a given electron app to “appear performant”. I say “appear”, because even vscode
leverages various strategies to appear more performant than it might actually be in certain scenarios. I’m not saying this to bash vscode, but because techniques like “lazy loading” are simply a tool in the toolbox called “performance tuning”.
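A minimal sketch of what such lazy loading can look like in practice. Here `node:zlib` just stands in for some hypothetical heavy dependency; the point is that the `import()` only happens when the feature is first used, not at startup:

```javascript
// Sketch: lazy-load a heavy module on first use to keep startup snappy.
let exporterPromise = null;

function getExporter() {
  // Deferred dynamic import; subsequent calls reuse the same promise.
  if (!exporterPromise) exporterPromise = import('node:zlib'); // stand-in for a heavy dep
  return exporterPromise;
}

async function exportData(buf) {
  const zlib = await getExporter();
  return zlib.gzipSync(buf);
}
```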
BTW: Not even using C++ will guarantee a performant application in the end, if the application topic itself is complex enough (e.g. video editors, DAWs, etc.) and one doesn’t pay attention to performance during development.
All it takes is letting a bunch of somewhat CPU-intensive procedures pile up in an application, and at some point it will feel sluggish in certain scenarios. The only way out of that is to measure where the actual bottlenecks are, then think about how one could get away with doing less (or doing less while a bunch of other things are going on, deferring the rest to more of an "idle" time), and then make the respective changes to the codebase.
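One simple way to stop a CPU-heavy batch from blocking everything else is to chunk it and yield back to the event loop between chunks. A sketch (plain Node-style; in an Electron renderer you might use requestIdleCallback instead of setImmediate):

```javascript
// Sketch: spread a CPU-heavy batch job over the event loop instead of
// running it in one blocking go.
async function processInChunks(items, workFn, chunkSize = 100) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(workFn(item));
    }
    // Yield so UI / IPC events queued in the meantime get handled.
    await new Promise(resolve => setImmediate(resolve));
  }
  return results;
}
```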
Kinda, but that’s “rainbows”.
Status quo (your comment): 8 x 7 + 1 = 57
sure is a bunch of stripes.
I see. 9th rainbow, here we go.
Oh wow, even if you put it in landscape? In either case, lemmy’s web interface hides a lot of context by default when answering via the “messages” notification. So in a sense, with that one could reply endlessly. Then again, that’s not part of our experiment I’d say.
Right, that’s how it all started.
I just unfolded everything. Seems we are on the 8th rainbow. Looks like on my phone, in portrait mode, 10 rainbows will likely have it filled up.
Alright, second season, here we go!
Ha, for sure I missed the other comment…
Yeah, that browser zoom. And I too used / use Firefox. I’m not saying these kinds of sites are common, but nevertheless I’ve encountered them occasionally. Back then, the most pragmatic workaround was to use the desktop zooming of Xfce.
My intention on the previous comment was simply to give some examples of desktop zooming that go beyond the typical accessibility viewpoint (e.g. vision impairment).
That’s why regular backups are advisable.
Yeah, AFAIR, the issue of “Windows messing up grub” could happen when both were installed on the same disk (e.g. on a laptop with a single disk). Something about Windows overwriting the “MBR sector”. At least that was a problem back before UEFI.
I too have been dual booting Windows 10 and Linux for many years now, each having their own physical disk, Linux one always being first in boot order. Not once did a Windows 10 update mess up grub for me with this setup.
Not the same as “on demand zooming”, which lets one stick with a high, native resolution, but zoom in when required (e.g. websites with small text that can’t be zoomed via the browser’s font size increase; e.g. referencing some UI stuff during UI design, without having to take a screenshot and paste + zoom it in e.g. GIMP).
You didn’t mention how big those volumes are and how frequently the data changes.
Assuming it’s not that much data:
- use tar to archive each volume first, with proper options to preserve permissions and whatever else is important for your use case
- back up those archives with restic; maybe you can back them up separately and apply a more aggressive pruning strategy just for them

Honestly, if all you’ve ever experienced in regards to terminals is Windows CMD, then you really haven’t seen much. I mean that positively. If anything, CMD gives you a far worse impression of what using a Linux / Unix terminal can be like (speaking as someone who has spent what feels like years in terminals, the least amount of that in Windows CMD).
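A rough sketch of that tar + restic approach. The demo directories are only there so the snippet runs as-is; in reality $VOLUMES would point at your actual volume mount points, and the restic repo would already exist (e.g. via `restic init`):

```shell
#!/bin/sh
# Demo setup (stand-in for real volume mount points).
ARCHIVE_DIR=$(mktemp -d)
DEMO=$(mktemp -d)
mkdir -p "$DEMO/db_data" "$DEMO/app_data"
echo "some data" > "$DEMO/db_data/file.txt"
VOLUMES="$DEMO/db_data $DEMO/app_data"

for vol in $VOLUMES; do
  name=$(basename "$vol")
  # -p preserves permissions; add --xattrs / --acls if your data needs them.
  tar -cpzf "$ARCHIVE_DIR/$name.tar.gz" -C "$vol" .
done

# Then hand the archives to restic (repo and password via environment):
#   restic backup "$ARCHIVE_DIR" --tag volume-archives
# and prune those snapshots more aggressively than the rest:
#   restic forget --tag volume-archives --keep-daily 7 --prune
```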
I suggest simply playing around with a Linux terminal (e.g. install VirtualBox, then use it to install e.g. Ubuntu, then follow some simple random “Linux terminal beginner tutorial” you can find online).
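To give a taste of what such a tutorial starts with, a few harmless first commands to try once Ubuntu is up (nothing here is specific to Ubuntu):

```shell
pwd                      # where am I?
ls -la                   # list files, including hidden ones
mkdir -p playground      # create a directory
cd playground
echo "hello" > note.txt  # write a small file
cat note.txt             # prints: hello
```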
One takeaway from this surely is that such deeply nested endeavours sure are easily missed.
I went through setting up netdata for a staging (in preparation for a production) server not too long ago.
The netdata docs were quite clear on the fact that the default configuration is a “showcase configuration”, not a “production ready configuration”!
It’s really meant to show off all features to new users, who then can pick what they actually want. The great thing about disabling unimportant things is that one gets a lot more “history” for the same amount of storage, because there are simply fewer data points to track. Similar with adjusting the rate at which it takes data points: for instance, going down from the default 1s interval to 2s basically halves the CPU requirement, even more so if one also disables the machine learning stuff.
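For reference, those two tweaks live in netdata.conf and look roughly like this (section and key names may differ a bit between netdata versions, so double-check against the docs for yours):

```ini
[global]
    # Collect a data point every 2 seconds instead of the default 1s --
    # roughly halves collection CPU usage and doubles retention.
    update every = 2

[ml]
    # Disable the anomaly-detection / machine learning feature.
    enabled = no
```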
The one thing I have to admit though is that “optimizing netdata configs” really isn’t that quickly done. There’s just a lot of stuff it provides, lots of docs reading to be done until one roughly gets a feel for configuring it (i.e. knowing what all could be disabled and how much of a difference it actually makes). Of course, there’s always a potential need for optimizations later on when one sees the actual server load in prod.