What the fuck is with the person in the email chain implying not all drivers can be written in Rust? Rust does literally everything C can; there’s nothing stopping you from using unsafe properly to achieve that.
A more serious answer would be that Rust can’t be compiled to all targets. There is a lot of work going into getting Rust to compile with GCC, though, which would help with this tremendously.
Gnu C has computed goto.
That’s a performance optimisation which LLVM is likely to do for you anyway; jump tables aren’t exactly rocket science. Gazing into my crystal ball, you might have to turn your enum variant Foo(u8) with possible values 0 to 15 into Foo0 through Foo15 (or maybe better, Foo(EnumWith16Variants)) so that the compiler doesn’t have to evaluate code to figure out that it doesn’t need to NOP the rest of the jump range out, or bail out of generating a jump table, or whatever it would do.
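To make that concrete, here’s a throwaway sketch (mine, untested against the actual codegen; Op and dispatch are made-up names): a fieldless enum where every value is a valid case, i.e. the Foo0 through Foo15 shape, so a match over it is the kind of dense dispatch LLVM can turn into a jump table without first having to reason about the 240 u8 values that can never occur.

```rust
// Fieldless stand-in for Foo(u8) restricted to 0..=15: every discriminant is a
// real variant, so the match below is trivially dense and exhaustive.
#[allow(dead_code)] // only Op7 is constructed in this toy example
#[repr(u8)]
enum Op {
    Op0, Op1, Op2, Op3, Op4, Op5, Op6, Op7,
    Op8, Op9, Op10, Op11, Op12, Op13, Op14, Op15,
}

fn dispatch(op: Op) -> u32 {
    // Arbitrary distinct results so each arm is "real work"; with unit variants
    // the compiler never has to prove anything about out-of-range payload values.
    match op {
        Op::Op0 => 3,  Op::Op1 => 1,  Op::Op2 => 4,  Op::Op3 => 1,
        Op::Op4 => 5,  Op::Op5 => 9,  Op::Op6 => 2,  Op::Op7 => 6,
        Op::Op8 => 5,  Op::Op9 => 3,  Op::Op10 => 5, Op::Op11 => 8,
        Op::Op12 => 9, Op::Op13 => 7, Op::Op14 => 9, Op::Op15 => 3,
    }
}

fn main() {
    println!("{}", dispatch(Op::Op7));
}
```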
But is it used in the kernel?
Longjmp? Not that the kernel uses that ofc.
Begone, vile creature, use call/cc if you’re into that kind of stuff. What you’re doing might be consensual, but it sure ain’t safe or sane. Unless you’re implementing call/cc, then I forgive you, and extend my condolences.
Longjmp in C is generally used for implementing exceptions or things of that sort, which is fine. Even Ada and Haskell have exceptions. C++ has them too, though maybe that doesn’t speak in their favor as much. Rust lacks them, but so far that seems to me to be a shortcoming in Rust. Even those who would rather have error-value checking than unwinding through the entire call stack could look at the C++ deterministic exception proposal, which is similar to Haskell’s ErrorT monad transformer.
This is worth reading about Haskell if such topics are of interest: https://research.microsoft.com/en-us/um/people/simonpj/papers/marktoberdorf/mark.pdf
Anyway, Rust has its own sort of call/cc thing as far as I can tell (not sure), in its async runtimes. There is a coroutine-switching scheme underneath that, amirite? Where do the call stacks for the different async tasks live, and how are they allocated? I’ve been wondering; “Comprehensive Rust” doesn’t go into any detail about this.
Call/cc in its most general form is possibly evil, but delimited continuations, the most common use of call/cc, are perfectly cromulent as far as I know. My brain is not currently big enough to understand the topic but I’m going by some of Oleg Kiselyov’s old writings, probably linked from here: https://okmij.org/ftp/continuations/index.html
Rust lacks them, but so far that seems to me to be a shortcoming in Rust
I think it’s a benefit. In Rust, you have two options:
panic - usually catchable with catch_unwind() (some types are not), and should be used rarely; this isn’t try/catch
result - return a value that indicates an error, like a super fancy errno, but with monad semantics that force you to acknowledge it (there’s a small sketch below, after the list of reasons)
This is good for a few reasons:
“normal” errors are explicit in the return type; if a function doesn’t return a Result, it can’t return errors
outliers aren’t accidentally handled - since you shouldn’t be catching panics, you can rely on them to completely crash
no need to instrument the code with unwinding logic in case something throws an exception, meaning a more efficient runtime
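Here’s the minimal sketch promised above (parse_port is a made-up example, not anything from std):

```rust
use std::num::ParseIntError;

// The Result flavour: the signature says this can fail, and `?` forces the
// caller chain to either handle the error or pass it along.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    let n: u16 = s.parse()?; // propagate the "super fancy errno" upwards
    Ok(n)
}

fn main() {
    // "Normal" error, handled explicitly because the type made us look at it.
    match parse_port("not a port") {
        Ok(p) => println!("port {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }

    // The panic flavour: a bug-style failure. catch_unwind can stop it at a
    // boundary (FFI, thread pools), but it is not meant to be used as try/catch.
    let res = std::panic::catch_unwind(|| {
        let n: u16 = "not a port".parse().unwrap(); // unwrap turns the Err into a panic
        n
    });
    assert!(res.is_err());
}
```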
Where do the call stacks for the different async tasks live and how are they allocated?
The async runtime handles that. You’d need to look at the specific implementation to see how they handle it; the standard library only provides tools for dealing with async.
My understanding is that it works kind of like a generator, so any await call yields to the runtime, which resumes another generator. That’s an oversimplification, but hopefully it’s close enough.
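If anyone wants to poke at that by hand, here’s a toy sketch (no real runtime, just a no-op waker and manual polling; YieldOnce and noop_waker are names I made up) of a future that yields once and is then resumed, which is roughly the shape an .await point takes:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Reports Pending once (handing control back to whoever polled it), then Ready
// on the next poll; a hand-written stand-in for a single suspension point.
struct YieldOnce { yielded: bool }

impl Future for YieldOnce {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            Poll::Ready(())
        } else {
            self.yielded = true;
            cx.waker().wake_by_ref(); // tell the "runtime" we want to be polled again
            Poll::Pending
        }
    }
}

// Minimal no-op waker so we can poll without any runtime at all.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = YieldOnce { yielded: false };
    let mut fut = Pin::new(&mut fut);
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Pending);   // the task "yields"
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready(())); // the task resumes and finishes
}
```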
Call/cc
I think this is basically just the general idea of a generator, but the caller chooses where the yield yields to.
I agree with the criticisms of call/cc, and I think it’s much clearer to just use coroutines (of which generators are a special case) and otherwise linear control flow instead of messing about with continuations. IMO, the only people that should mess with continuations are lower level engineers, like driver authors, compiler devs, and async lib devs. Regular devs will just create more problems for themselves.
async is stackless coroutines, less powerful than stackful ones and vastly less powerful than first-class continuations, which is what call/cc provides, but also way more performant as there’s basically zero memory management overhead…
Where do the call stacks for the different async tasks live and how are they allocated?
That’s the neat thing: they aren’t. Futures don’t contain their stack; the compiler infers, statically, what they will need and puts that in a struct. As said, practically zero memory overhead.
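One way to see that in the small (illustrative toy, task and subtask are made-up names, no real runtime involved): the value an async fn returns is an ordinary, statically sized struct, and its size is all the “stack” the task will ever get.

```rust
use std::mem::size_of_val;

async fn subtask() {}

async fn task() {
    let buf = [0u8; 64];    // lives across the .await, so it must be stored in the future
    subtask().await;        // suspension point
    println!("{}", buf[0]); // still needed afterwards
}

fn main() {
    let fut = task();
    // The whole "call stack" of the task is this one compiler-generated struct.
    println!("future size: {} bytes", size_of_val(&fut));
}
```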
First-class continuations aren’t evil, they’re naughty: dirty only if used when they don’t actually make anything clearer. Delimited continuations are actually a generalisation of call/cc and arguably way easier to think about; it’s just that call/cc predates their discovery and made it into standard Lisp and Scheme. Where is that Felleisen paper… here: 1988. 13 years after Scheme, 28 after Lisp. longjmp/setjmp have existed since at least System V and C89 according to the Linux man pages; I guess they implemented them to implement Lisp in. It’s just that C is about the worst language to use when you’re writing anything involving continuations: everything gets very complicated very fast, and not because continuations are involved but because you have no GC, no nothing, to support you.
Scheme has call/cc but standard Lisp (i.e. Common Lisp) doesn’t. Hmm, Rust async is like C++20 coroutines, so they can only yield from the outermost level of the task etc.? (Added: C Protothreads also come to mind). That sounds constraining even compared to the very lightweight Forth multitaskers of the 1970s. Python’s original generators were like that, but they fixed them later to be stackful coroutines. ~And do you mean there is something like a jump table in the async task, going to every possible yield point in the task, such as any asynchronous i/o call? That could be larger than a stack frame, am I missing something?~ (No that wouldn’t be needed, oops).
Setjmp/longjmp in C had some predecessor with a different name that went all the way back to early Unix, way before C89. It was routinely used for error handling in C programs.
I’ve implemented Lisp (without call/cc) in C using longjmp to handle catch/throw. It wasn’t bad. Emacs Lisp also works like that.
I had been pretty sure that delimited continuations were strictly less powerful than first-class continuations, but I’ll look at the paper you linked.
I found some search hits saying you were supposed to implement signal handlers in Rust by setting a flag in the handler and then checking it manually all through the program. Even C lets you avoid that! It sounds painful, especially when there can be asynchronous exceptions such as arithmetic error traps (SIGFPE) that you want to treat sanely. But I haven’t looked at any Rust code that does stuff like that yet.
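For reference, the flag pattern those search hits describe looks roughly like this; a sketch assuming the third-party signal_hook crate (std itself offers nothing for installing signal handlers):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::Duration;

fn main() -> Result<(), std::io::Error> {
    // The real handler only flips this flag (about the only thing that is
    // async-signal-safe anyway)...
    let got_sigint = Arc::new(AtomicBool::new(false));
    signal_hook::flag::register(signal_hook::consts::SIGINT, Arc::clone(&got_sigint))?;

    loop {
        // ...and the program checks it whenever it reaches a safe point.
        if got_sigint.swap(false, Ordering::SeqCst) {
            eprintln!("got SIGINT, handling it at a point of our choosing");
            break;
        }
        // normal work would go here
        std::thread::sleep(Duration::from_millis(100));
    }
    Ok(())
}
```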
Hmm, I wonder if it’s feasible to use the Boehm garbage collection method in Rust, where the unsafe region is limited to the GC itself. Of course it would use pointer reversal to avoid unbounded stack growth. Of course I don’t know if it’s feasible to do that in Ada either.