Odin is a great language. Its creator, GingerBill, is an incredibly talented language designer, and I would encourage anyone interested in programming languages to listen to some of the interviews he has done on various podcasts. The way he thinks about tradeoffs and problems when designing a language is exactly what is required to produce something like Odin. He has a level of craftsmanship that is rare.
A lot of it just comes down to good taste; he's upfront about the language's influences, and when something is good there's no shame in taking it. For example, the standard library package structure is very similar to Go's.
There are plenty of innovations as well. I haven't seen anything quite like the context system before, definitely not done as well as in Odin.
mrkeen 12 hours ago [-]
> In Odin all variables are automatically zero initialized. Not just integers and floats. But all structs as well. Their memory is filled with zeroes when those variables are created.
> This makes ZII extra powerful! There is little risk of variables accidentally being uninitialized.
The cure is worse than the problem. I don't want to 'safely' propagate my incorrect value throughout the program.
If we're in the business of making new languages, why not compile-time error for reading memory that hasn't been written? Even a runtime crash would be preferable.
tlb 12 hours ago [-]
Being initialized to zero is at least repeatable, so if you forget to initialize something you'll notice it immediately in testing. The worst part about uninitialized variables is that they frequently are zero and things seem to work until you change something else that previously happened to use the same memory.
thasso 11 hours ago [-]
> The worst part about uninitialized variables is that they frequently are zero and things seem to work until you change something else that previously happened to use the same memory.
This is not the whole story. You're making it sound like uninitialized variables _have_ a value but you can't be sure which one. This is not the case. Uninitialized variables don't have a value at all! [1] has a good example that shows how the intuition of "has a value but we don't know which" is wrong:
    use std::mem;

    fn always_returns_true(x: u8) -> bool {
        x < 120 || x == 120 || x > 120
    }

    fn main() {
        let x: u8 = unsafe { mem::MaybeUninit::uninit().assume_init() };
        assert!(always_returns_true(x));
    }
If you assume an uninitialized variable has a value (but you don't know which) this program should run to completion without issue. But this is not the case. From the compiler's point of view, x doesn't have a value at all and so it may choose to unconditionally return false. This is weird but it's the way things are.
It's a Rust example but the same can happen in C/C++. In [2], the compiler turned a sanitization routine in Chromium into a no-op because they had accidentally introduced UB.
The unsafe part is supposed to tell you that any assumptions you might make might not hold true.
gingerBill 11 hours ago [-]
> You're making it sound like uninitialized variables _have_ a value but you can't be sure which one.
Because that's a valid conceptualization you could have for a specific language. Your approach and the other person's approach are both valid but different, and as I said in another comment, they come with different compromises.
If you are thinking like some C programmers, then `int x;` can either have a value which is just not known at compile time, or you can think of it as having a specialized value of "undefined". The compiler could work with either definition; it just happens that most compilers nowadays, for C and Rust at least, use the definition you speak of, for better or for worse.
nlitened 8 hours ago [-]
> C programmers, then `int x;` can either have a value which is just not known at compile time
I am pretty sure that in C, when a program reads an uninitialized variable, it is undefined behavior, and the program is pretty much allowed to crash — for example, if the variable turned out to be on an unallocated page of stack memory.
So literally the variable does not have a value at all, as that part of address space is not mapped to physical memory.
maccard 2 hours ago [-]
The problem is that “allowed to crash” is one interpretation. Another is “0 initialized” (which debug runtimes sometimes use), another is “whatever was on the stack last time”, and another is “we can reorder the program and eliminate what you think is logical code”.
    #include <iostream>

    int main() {
        int val;                  // uninitialized
        if (val == 3) {           // reading val here is undefined behavior
            std::cout << "here" << std::endl;
        }
        return 0;
    }
A perfectly legal interpretation of this program is to remove the call to cout entirely; another, equally legal, is to print “here” on every run.
> So literally the variable does not have a value at all, as that part of address space is not mapped to physical memory.
There are vanishingly few platforms where the stack you have in a C program maps to physical memory (even if you consider pages from the OS)
gingerBill 7 hours ago [-]
It is "undefined behaviour" in C (an overloaded term; I won't get into why I hate it in this comment). But my point was that this is how many people conceptualize it, and for many things people do expect it to be one of the possible values, just not knowable ahead of time.
However, I was using that "C programmers" bit to explain the conceptualization aspect, and how it also applies to other languages. Not every language, even systems languages, have the same concepts as C, especially the same construction as "UB".
shwouchk 6 hours ago [-]
As someone who recently wondered what kinds of things might happen, I'm actually very glad for GP's clarification.
uecker 4 hours ago [-]
It is undefined in C for automatic variables whose address was not taken (and in this case a compiler should be able to warn).
steveklabnik 4 hours ago [-]
Interestingly enough, C++26 introduces "erroneous behavior" and uses it for uninitialized variables, rather than undefined behavior.
gingerBill 11 hours ago [-]
You're assuming that's the style of programming others want to program in. Some people want the "ZII" approach. Your approach is a trade-off with costs which many others would not want to make. So it's not "preferable", it's a different compromise.
iainmerrick 9 hours ago [-]
That's clearly correct, as e.g. Go uses this style and there are lots of happy Go users.
I want to push back on the idea that it's a "trade-off", though -- what are the actual advantages of the ZII approach?
If it's just more convenient because you don't have to initialize everything manually, you can get that with the strict approach too, as it's easy to opt-in to the ZII style by giving your types default initializers. But importantly, the strict approach will catch cases where there isn't a sensible default and force you to fix them.
Is it runtime efficiency? It seems to me (but maybe not to everyone) that initialization time is unlikely to be significant, and if you make the ZII style opt-in, you can still get efficiency savings when you really need them.
The explicit initialization approach seems strictly better to me.
gingerBill 9 hours ago [-]
> It seems to me... that initialization time is unlikely to be significant
The thing is, initialization cost is a lot more than you think it is, especially when it's done on a per-object level rather than a "group" level.
This is kind of the point of trying to make the zero value useful: it's trivially initialized. In languages that are much more strict in their approach, initialization is done at that per-object level, which means the cost goes from being anywhere between free (VirtualAlloc/mmap has to produce zeroed memory) and trivially linear (e.g. memset) to a nested hierarchy of initialization (e.g. a for-loop with a constructor call for each value).
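A rough C sketch of that difference; the Entity type and the helper names are invented for illustration:

    #include <stdlib.h>

    typedef struct { float x, y, z; } Entity;   /* hypothetical type */

    /* "Group" initialization: free-to-linear, since calloc (like VirtualAlloc/mmap)
       hands back zeroed memory for the whole block at once. */
    Entity *make_entities_zeroed(size_t n) {
        return calloc(n, sizeof(Entity));
    }

    /* Per-object initialization: a constructor-like call for every element. */
    void entity_init(Entity *e) { e->x = 0.0f; e->y = 0.0f; e->z = 0.0f; }

    Entity *make_entities_per_object(size_t n) {
        Entity *es = malloc(n * sizeof(Entity));
        if (!es) return NULL;
        for (size_t i = 0; i < n; i++) {
            entity_init(&es[i]);   /* the nested, per-element cost described above */
        }
        return es;
    }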
It's non-obvious why the "strict approach" would be worse, but it's more about how people actually program rather than a hypothetical approach to things.
So of course each style is about trade-offs. There are no solutions, only trade-offs. And different styles will have different trade-offs, even if they are not immediately obvious and require a bit of experience.
That's what I was trying to get at by talking about making ZII opt-in. If you're using a big chunk of memory — say a matrix, or an array of matrices — it's a win if you can zero-initialize it cheaply or for free, sure. In JS, for example, you'd allocate an ArrayBuffer and use it immediately (via a TypedArray or DataView).
But still, in other parts of the program, ZII is bad! That local or global variable pointing at an ArrayBuffer should definitely not be zero-initialized. Who wants a null pointer, or a pointer to random memory of unknown size? Much better to ensure that a) you actually construct a new TypedArray, and b) you don't use it until it's constructed.
I guess if you see the vast majority of your action happening inside big arrays of structs, pervasive ZII might make sense. But I see most of the action happening in local and temporary variables, where ZII is bad and explicit initialization is what you want.
Moving from JavaScript to TypeScript, to some extent you can get the best of both worlds. TS will do a very good (though not perfect) job of forcing you to initialize everything correctly, but you can still use TypedArray and DataView and take advantage of zero-initialization when you want to.
ZII for local variables reminds me of the SmallTalk / Obj-C thing where you could send messages to nil and they're silently ignored. I don't really know SmallTalk, but in Obj-C, to the best of my knowledge most serious programmers think messages to nil are a bad idea and a source of bugs.
Maybe this is another aspect where the games programming mindset is skewing things (besides the emphasis on low-level performance). In games, avoiding crashes is super important and you're probably willing to compromise on correctness in some cases. In most non-games applications, correctness is super important, and crashing early if something goes wrong is actually preferable.
gingerBill 7 hours ago [-]
Making it opt-in means making the hierarchical approach the default. Whatever you make "opt-in" means you are by default discouraging its use. And what you are suggesting as the default is not what I wanted from Odin (I am the creator, by the way).
I normally say "try to make the zero value useful" rather than "ZII" (which was a mostly jokey term Casey Muratori came up with as a riff on RAII), because then it is clear that there are cases where ZII is not possible. ZII is NOT a _maxim_ but what you should default to, doing something else where necessary. This is my point, and I can probably tell you even more examples of where "ZII is bad" than you could think of, but that is the problem with describing it to people: they take it as a maxim, not a default.
And regarding pointers, I'm in the camp that nil pointers are, empirically speaking, the most trivial type of invalid pointer to catch. Yes they cause problems, but because of how modern systems are structured with virtual memory, they are trivial to catch and deal with. Yes, you could design the type system of a language to make nil pointers not be a thing unless you explicitly opt into them, but then that has another trade-off which may or may not be a good thing depending on the application.
The Objective-C thing is just a poorly implemented system for handling `nil`. It should have been more consistent but wasn't. That's it.
I'd argue "correctness" is important in games too, but the conception of "correctness" is very different there. It's not about provability but testability, which are both valid forms of "correctness" but very different.
And in some non-game applications, crashing early is also a very bad thing, and for some games, crashing early is desired over corrupted saves or other things. It's all about which trade-offs you can afford, and I would not try to generalize too much.
iainmerrick 6 hours ago [-]
Yeah, that's fair, clearly this sort of thing is why we have multiple languages in the first place!
I don't think I'll ever abandon the idea that making code "correct by construction" is a good goal. It might not always be achievable or practical but I strongly feel it's always something to aim for. For me, silent zero initialization compromises that because there isn't always a safe default.
I think nil pointers are like NaNs in arithmetic. When a nil or a NaN crops up, it's too late to do anything useful with it, you generally have to work backwards in the debugger to figure out where the real problem started. I'd much rather be notified of problems immediately, and if that's at compile time, even better.
In the real world, sure, I don't code review every single arithmetic operation to see if it might overflow or divide by zero. But when the compiler can spot potential problem areas and force me to check them, that's really useful.
Rusky 4 hours ago [-]
If you don't want to make it "opt-in" would it at least make sense to make it "opt-out"? Does Odin have a way for specific types to omit a zero value?
gingerBill 4 hours ago [-]
That would require having constructors, which is not something Odin will ever have, nor should it. However, you can just initialize with a constant or a variable, or use a procedure to do the initialization. Odin is a C alternative after all, so it's a fully imperative procedural language.
Rusky 1 hours ago [-]
Why would it require constructors? As opposed to simply enforcing that it always be initialized with a constant/variable/procedure/etc rather than zeroed.
igouy 4 hours ago [-]
> the SmallTalk / Obj-C thing where you could send messages to nil and they're silently ignored.
Messages sent to the Smalltalk UndefinedObject instance are not silently ignored — they result in #doesNotUnderstand.
Sometimes that run time message lookup has been used to extend behavior —
1986 "Encapsulators: A New Software Paradigm in Smalltalk-80"
I always find this opinion intriguing, where it's apparently fine that globals are initialized to zero, but you are INSANE to suggest it's the default for locals. What kind of programs are y'all writing?
Clearly the lack of zeroing in C was a trade-off at the time. Just like UB on signed overflow. And now people seem to consider them "obviously correct designs".
Tuna-Fish 8 hours ago [-]
I'd prefer proper analysis for globals too, but that is substantially harder.
"Improperly using a variable before it is initialized" is a very common class of bug, and an easy programming error to make. Zero-initializing everything does not solve it! It just converts the bugs from ones where random stack frame trash is used in lieu of the proper value into ones where zeroes are used. If you wanted a zero value, it's fine, but quite possibly you wanted something else instead and missed it because of complex initialization logic or something.
What I want is a compiler that slaps me when I forget to initialize a proper value, not one that quietly picks a magic value it thinks I might have meant.
nickpsecurity 3 hours ago [-]
It might be easier to detect a zero value. It might be easier to debug. People used to use hard-coded, human-visible values for debugging for that reason.
Tuna-Fish 3 hours ago [-]
Sure, and I'm not against belt-and-suspenders here.
It's just that "all values are defined to be zero-initialized, and you can use them as such" is a horrible decision. It means that you cannot even get best effort warnings for lack of initialization, because as far as the compiler knows you might have meant to use the zero value.
canucker2016 2 hours ago [-]
So fixing approx. 5-10% of CVEs (by zero-initializing all stack vars) is a worse cure than letting these uninitialized stack vars be possible sources of exploits?
Same. Rust's `Default` (both derive and custom) is my favorite way of handling a quick init, of any language. The key part is I can initialize quickly and without effort, with values that make sense based on the context.
variadix 2 hours ago [-]
This is certainly an interesting argument for making certain behavior (in this case, uninitialized access) the default and UB. There’s a similar argument for making signed overflow UB instead of defined to wrap, even if you’re only targeting two’s-complement machines: leaving the behavior undefined enables analyzers to detect it, and making it the default can make otherwise silent errors detectable across all programs. I think I’ve come around to wanting these to be undefined and the default; it’s unintuitive, but defined wrapping or zero initialization may be undesirable behaviors anyway.
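For instance, a sketch (the program is made up; the sanitizer flag is UBSan's real signed-overflow check in clang/gcc): precisely because the overflow is left undefined, a tool is allowed to flag it at runtime, whereas defined wrapping would make the same bug indistinguishable from intended behavior.

    /* build with: cc -fsanitize=signed-integer-overflow overflow.c */
    #include <limits.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        (void)argv;
        int x = INT_MAX;
        int y = x + argc;    /* argc >= 1, so this signed addition overflows: UB */
        printf("%d\n", y);   /* UBSan reports the overflow instead of silently wrapping */
        return 0;
    }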
jimbob45 1 hours ago [-]
The dangerous behavior should be opt-in, not opt-out. I appreciate that C gives you all of these neat footguns, but they need to be hidden away for only those who need them to find. Stuff like implicit function declarations being the default, int wrapping, and non-initialized variables just gives rise to bugs. And for what? So first-year students can have their unoptimized code be 0.00001% faster by default? It's dumb.
dooglius 8 hours ago [-]
> why not compile-time error for reading memory that hasn't been written
A compiler doesn't have to accept all possible programs. If it can't prove that a variable is initialized before being read, then it can simply require that you explicitly initialize it.
dooglius 6 hours ago [-]
Sure, but then not accepting many programs would be the answer to the parent's question of "why not".
jerf 4 hours ago [-]
Not accepting many C programs, maybe. It's pretty easy to create a language where declaration is initialization of some sort, as evidenced by the large number of languages in common use where, one way or another, that's already the case.
This isn't some whacko far out idea. Most languages already today don't have any way (modulo "unsafe", or some super-carefully declared and defined method that is not the normal operation of the language) of reading uninitialized memory. It's only the residual C-likes bringing up the rear where this is even a question.
(I wouldn't count Odin's "explicitly label this as not getting initialized"; I'm talking about defaults being sharp and pointy. If a programmer explicitly asks for the sharp and pointy, then it's a valid choice to give it to them.)
dooglius 3 hours ago [-]
I think we are in agreement? Odin works the way you describe, and GP in response expressed a preference that the compiler instead fail at compile time if it detected that memory had not been explicitly initialized; my response was to explain why this is not (in the general case) feasible.
reverius42 2 hours ago [-]
It may not be feasible in the general case by changing the compiler, but it's definitely feasible in the general case by changing the language. If you can't specify an uninitialized variable syntactically then you don't have to analyze whether it exists semantically.
trealira 3 hours ago [-]
Somehow Rust is able to do it, though. Is it really that hard for compilers to do flow analysis to detect and forbid uses of uninitialized variables? Not even being sarcastic, I genuinely would like to know why more languages don't do this.
steveklabnik 53 minutes ago [-]
The flow analysis isn't particularly hard, but lots of languages simply don't do it because they don't allow uninitialized variables in the first place. Given null is a pretty common concept, you just say they're initialized but null, and you don't even need to do the analysis at all.
trealira 2 hours ago [-]
This is a self-response, but I've thought of a case where it might be fairly difficult for a compiler to prove a variable is always initialized, because of the use of pointers. Take this function to copy a linked list in C:
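Roughly, a sketch of the kind of function being described (the struct node layout is assumed here: an int payload plus a next pointer; malloc error handling omitted):

    #include <stdlib.h>

    struct node {
        int value;
        struct node *next;
    };

    struct node *copy_list(const struct node *src) {
        struct node *new_list;                      /* never assigned to by name */
        struct node **indirect = &new_list;

        while (src != NULL) {
            *indirect = malloc(sizeof(**indirect)); /* first iteration writes new_list */
            (*indirect)->value = src->value;
            indirect = &(*indirect)->next;
            src = src->next;
        }
        *indirect = NULL;   /* if the loop never ran, this is what sets new_list */

        return new_list;
    }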
The variable "new_list" is always initialized, no matter what, even though it's never explicitly on the left hand side of an assignment. If the while loop never ran, then indirect is pointing to the address of new_list after the loop, and the "*indirect = NULL;" statement sets it to NULL. If the loop did run, then "new_list" is set to the result of a call to malloc. In all cases, the variable is set.
But it feels like it would be hard for something that isn't a formal proof assistant to prove this. The equivalent Rust code (unidiomatic as it would be to roll your own linked list code) would require you to set "new_list" to be None before the loop starts.
thasso 12 hours ago [-]
I agree that zero-initializing doesn't really help avoid incorrect values (which is what the author focuses on) but at least you don't have UB. This is the main selling point IMO.
yusina 11 hours ago [-]
Then why not just require explicit initialization? If "performance" is your answer, then a compiler optimization that detects zero initialization and skips the writes whenever the allocator already guarantees zeroed memory would be a solution, and a much safer alternative. Replacing one implicit behavior with another is hardly a huge success...
layer8 2 hours ago [-]
Operating systems usually initialize new memory pages to zero by default, for security reasons, so that a process can’t read another process’s old data. So this gives you zero-initialization “for free” in many cases. Even when the in-process allocator has to zero out a memory block upon allocation, this is generally more efficient than the corresponding custom data-type-specific default initialization.
If you have a sparse array of values (they might be structs), then you can use a zero value to mark an entry that isn’t currently in use, without the overhead of having to (re-)initialize the whole array up-front. This matters in particular when only a single byte per array element would need to be initialized as a marker, yet the compiler would force you to initialize the complete elements.
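A small C sketch of that pattern (the Slot layout and the count are illustrative): the zeroed allocation means every slot already reads as "not in use", with no per-element set-up pass.

    #include <stdlib.h>

    typedef struct {
        unsigned char in_use;   /* 0 marks "not currently in use" */
        double payload[8];      /* the (much larger) rest of the element */
    } Slot;

    double sum_used(void) {
        Slot *slots = calloc(1024, sizeof(Slot));   /* zeroed: all slots start empty */
        double total = 0.0;
        if (!slots) return 0.0;
        for (size_t i = 0; i < 1024; i++) {
            if (slots[i].in_use) {                  /* only occupied slots were ever fully written */
                total += slots[i].payload[0];
            }
        }
        free(slots);
        return total;
    }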
Similarly, there are often cases where a significant part of a struct typically remains set to its default values. If those are zero, which is commonly the case (or commonly can be made the case), then you can save a significant amount of extra write operations.
Furthermore, it also allows flexibility with algorithms that lazy-initialize the memory. An algorithm may be guaranteed to always end up initializing all of its memory, but the compiler would have no chance to determine this statically. So you’d have to perform a dummy initialization up-front just to silence the compiler.
90s_dev 11 hours ago [-]
I'd guess it was because 0 init is desired often enough that this is a convenient implicit default?
yusina 11 hours ago [-]
"Often enough" is what's introducing the risk for bugs here.
I "often enough" drive around with my car without crashing. But for the rare case that I might, I wear a seatbelt and have an airbag, instead of saying "well, I better be careful" or running a static analyzer on my trip planning that guarantees I won't crash. We do that when lives are on the line, so why not apply those lessons to other areas where people have been making the same mistakes for decades?
johnnyjeans 9 hours ago [-]
For the same reason you wear a seatbelt and not a 7-point crash harness.
sph 10 hours ago [-]
Please, can we stop assuming every single piece of software has actual lives on the line? These comment threads always devolve into implicit advertisement of Rust/Ada and other super strict languages because “what about safety?!”
It is impossible to post about a language on this forum before the pearl clutching starts if the compiler is a bit lenient instead of triple-checking every single expression and making you sign a release of liability.
Sometimes, ergonomics and ease of programming win over extreme safety. You’ll find that billion-dollar businesses have been built on zero-as-default (like in Go), and often the people reaching for it or for Go are just writing small personal apps, not a cruise missile navigation system.
It gets really tiring.
/rant
yusina 8 hours ago [-]
I'm actually with you on the ease of use. I don't see this as the opposite of safety. To me, making it harder for me to make mistakes means it's easier to use. That is, easier to use right and harder to use wrong. I'm not a Rust or Ada advocate. I'm just saying that making it harder to make the same mistakes people have been making for decades would be a good thing. That would contribute to ease of use in my book, since there are fewer things you need to think about that could possibly go wrong.
Or are you saying that a certain level of bugs is fine and we are at that level? Are you fine with the quality of all the software out there? Then yes, this discussion is probably not for you.
sph 8 hours ago [-]
> Are you fine with the quality of all the software out there?
This is the kind of generalisation I'm ranting against.
It is not constructive to extrapolate a discussion about a single, perhaps niche, programming language into applicable advice for "all the software out there". But you probably knew that already.
vacuity 7 hours ago [-]
TL;DR: I disagree, and I will say upfront that my views on software are extreme. I think quality is a glaring issue in most software.
There is a lot of subpar software out there, and the rest is largely decent-but-not-great. If it's security I want, that's commonly lacking, and hugely so. If it's performance I want, that's commonly lacking[0]. If it's documentation...you get the idea. We should have rigor by default, and if that means software is produced slower, I frankly don't see the problem with that. (Although commercial viability has gone out the window unless big players comply.) Exceptions will be carved out depending on the scope of the program. It's much harder to add in rigor post hoc. The end goal is quality.
The other issue is that a program's scope is indeed broader than controlling lives, and yet there are many bad outcomes. If I just get my passwords stolen, or my computer crashes daily, or my messaging app takes a bit too long to load every time, what is the harm? Those are wildly different outcomes, but I think at least the first and second are obviously quality issues, and I think the third is also important.
Why is the third important? When software is such an integral part of users' lives, minor issues cause faults that prompt workarounds or inefficiencies. [1] discusses a similar line of thought. I know I personally avoid some common actions (e.g. checking LinkedIn) because they involve pain points around waiting for my browser to load and whatnot, nothing major but something that's always present. Software ("automation") in theory makes all the things the user implicitly desires into non-pain points for the user.
An interesting blend of issues is system dialog password prompts, which users will generally try to either avoid or address on autopilot, which tends to reduce security. Or take system update restarts, which induce not updating frequently. Or take what is perhaps my favorite invective: blaming Electron apps. One Electron app can be inconvenient. Multiple Electron apps can be absurd. I feel like I shouldn't have to justify calling out Electron on HN, but I do, but I won't here.
And take unintended uses: if I need to set down an injured person across two chairs, I sure hope a chair doesn't break or something. Sure, that's not the intended use case of a chair, but I don't think it's unreasonable that a well-made chair would not fail to live up to my expectations. I wouldn't put an elephant on the chair either way, because intuitively I don't expect that much. Even then, users may expect more out of software than is reasonable, but that should be remedied and not overlooked.
Do not mistake having users for having a quality product.
You seem to use eager evaluation of usability whereas in practice most people only need lazy evaluation. We use risk assessment of going from point A to point B, two concrete points. You seem to use risk assessment equivalent to JavaScript's array.flat(Infinity).
bobbylarrybobby 6 hours ago [-]
If you zero initialize a pointer and then dereference it as if it were properly initialized, isn't that UB?
jerf 4 hours ago [-]
It is undefined behavior in C. In many languages it is defined behavior; for instance in Go, dereferencing a nil pointer explicitly panics, which is a well-defined operation. It may, of course, crash your program, and the whole topic of 'should pointers even be able to be nil?' is a valid separate other question, but given that they exist, the operation of dereferencing a nil pointer is not undefined behavior in Go.
To many people reading this, that may be a "duh", but I find it worth pointing out, because there are still some programmers who believe that C is somehow the "default" or "real" language of a computer and that everything about C is true of other languages. That is not the case. Undefined behavior in C is undefined in C, specifically. Try to avoid taking ideas about UB out of C and, to the extent that they are related (which slowly but surely decreases over time), C++. It's the language, not the hardware, that defines UB.
gingerBill 3 hours ago [-]
This is a common thing I get annoyed with when explaining Odin to people, too. Odin also defines dereferencing `nil` as panicking (on all systems with virtual memory, it comes for free).
C is just one language of many, and you do not have to define the rules of a new language in terms of it.
drannex 7 hours ago [-]
Not sure if anyone has mentioned it, but you can additionally disable ZII for any variable by assigning the value "---" in your declaration, which is useful when writing high-performance code. Here is an example:
number: int = ---
lblume 6 hours ago [-]
Yes, this is mentioned explicitly in the article.
melodyogonna 5 hours ago [-]
You're talking about Mojo there. Even memory allocated with UnsafePointer must be explicitly initialised before it can be written to or read from.
ratatoskrt 12 hours ago [-]
> why not compile-time error for reading memory that hasn't been written?
so... like Rust?
Timwi 11 hours ago [-]
Curiously, C# does both. It uses compile-time checks to stop you from accessing an uninitialized local and from exiting a struct constructor without initializing all fields; and yet, the CLR (the VM C# compiles to) zero-initializes everything anyway.
munificent 3 hours ago [-]
It has to because the analysis to detect that fields are initialized in the constructor body is unsound. Since you have access to `this` inside the constructor, you can call other instance methods which may access fields before they have been initialized.
Java has the same problem.
(Dart, which I work on, does not. In Dart, you really truly can't observe an instance field before it has been initialized.)
mrkeen 9 hours ago [-]
This is a pain. I recently switched from Java (and its whole Optional/null mess) to C#. I was initially impressed by its nullable checks, but then I discovered 'default'. Now I gotta check that Guids aren't 0000...? It makes me miss the Java situation.
electroly 6 hours ago [-]
You don't need the "default" keyword to run into that. A simple "new Guid()" gives you all-zeroes (try it!). Nice and foot-gunny.
neonsunset 5 hours ago [-]
Only if you go out of your way to author a method with a (Guid someGuid = default) argument. I've never seen it happen with Guids; if someone gives you default(Guid), they did it on purpose. It's no different from explicitly setting `0` on an integer-typed UserID property.
If supplying Guid is optional, you just make it Guid?.
To be fair, I don't think offering default(T) by default (ha) is the best choice for structs. In F#, you have to explicitly do `Unchecked.defaultof` and otherwise it will just not let you have your way - it is watertight. I much prefer this approach even if it can be less convenient at times.
dontlaugh 11 hours ago [-]
That’s likely because p/invoke is quite common.
neonsunset 9 hours ago [-]
No, that's just the memory model of the CLI and the choice made by C#. By default, it emits the localsinit flag for methods, which indicates that all local variables must be zero-initialized first. On top of that, you can't really access uninitialized memory in C# and F# anyway unless you use unsafe. It's a memory safety choice indeed, but it has nothing to do with P/Invoke.
dontlaugh 9 hours ago [-]
The main motivation to use unsafe is p/invoke.
Without unsafe, zero init is not needed.
neonsunset 9 hours ago [-]
> The main motivation to use unsafe is p/invoke.
This is opposite to the way unsafe (either syntax or known unsafe APIs) is used today.
dontlaugh 8 hours ago [-]
Explicit use of unsafe is used for things like avoiding allocation, sure.
All use of p/invoke is also unsafe though, even if the keyword isn’t used. And it’s much more common to wrap a C library than to write a buffer pool.
Much better outcomes and failure modes than RAII. IIRC, Odin mentions game programming as one of its use cases.
CyberDildonics 6 hours ago [-]
These are not very good arguments and Casey Muratori is hugely biased against RAII and C++ techniques for some reason, probably familiarity with C.
He thinks that every RAII variable is a failure point and that you only have to think about ownership if you are using RAII, so it incurs mental overhead.
The reality is that you have to understand the lifetime and ownership of your allocations no matter what. If the language does nothing for you the allocation will still have a lifetime and a place where the memory is deallocated.
He also talks about combining multiple allocations into a single allocation that then gets split into multiple pointers, but that could easily be done in C++.
jkercher 11 hours ago [-]
When I first heard about Odin, I thought, why another C replacement?! What's wrong with Rust or Zig? Then, after looking into it, I had a very similar experience to the author. Someone made a language just for me! It's for people who prefer C over C++ (or write C with a C++ compiler). It has the things that a C programmer has to implement themselves, like tagged unions, slices, dynamic arrays, maps, and custom allocators, while providing quality-of-life features like distinct typing, multiple return values, and generics. It just hits that sweet spot. Now, I'm spoiled.
lblume 6 hours ago [-]
May I ask what specifically you dislike about Rust (and Zig)? All the features you mentioned are also present in these languages. Do you care about a safety vs. simplicity of the language, or something else entirely?
sph 3 hours ago [-]
Call it a niche use case, but every time I had the chance to evaluate Rust, I had to write a function taking a callback, sometimes across to a C library. Every time, I have to deal with an Fn/FnOnce/FnMut trait signature, remember if I need to box it with dyn, mayhaps it takes a reference as an argument so I also need to deal with lifetime signatures, then remember the `for<'a>` syntax, then it blows up because I need to add `+ 'static` at the end, which still makes no sense to me, and then I just rage quit. I am decently handy with (unsafe) Rust, wrote a minimal OS in it, but dealing with function pointers makes me want to carve my eyes out.
C doesn’t even care. You can cast an int to a function pointer if you want.
With Odin it’s taken me like 5 minutes including reading the section of the docs for the first time.
phalanx104 2 hours ago [-]
`impl Fn/FnOnce/FnMut` don't stand for function pointers, but rather function items in Rust, and as such they are zero sized so Rust can provide optimizations regarding function items specifically at compile time.
They can decay to function pointers (`$(unsafe)? $(extern)? fn($(inp),*) $(-> $(output))?`, example: unsafe extern fn() -> i32), which you can freely cast between function pointers, `*const/mut T` pointers or `usize`s. https://doc.rust-lang.org/reference/types/function-pointer.h...
jkercher 13 minutes ago [-]
Rust and Zig are both perfectly fine languages. Odin wins on simplicity and familiarity for me. I'm most productive in C which is what I use at work. So, for me, it's a better C with some quality of life improvements.
It's not trying to be too radical, so there's not much to learn. The result is that I can move fast in Odin, and it is legitimately fun.
ithkuil 3 hours ago [-]
Zig is similar in spirit, but I think it tapped a bit more into the "innovation budget" and thus it might not click with everyone.
karl_zylinski 10 hours ago [-]
It's indeed some kind of sweet spot. It has those things from C I liked. And it made my favorite workflows from C into "first class citizens". Not everyone likes those workflows, but for people like me it's pretty ideal.
christophilus 10 hours ago [-]
Yep. It’s my favorite C-replacement. It compiles fast. It has all of the pieces and abstractions I care about and none of the cruft I don’t.
drannex 7 hours ago [-]
By far one of the best languages I have ever used professionally and as a hobbyist, which is why I donate every month to keep the project alive.
I am dropping the link here: those who can, should donate. Even if you don't use it, you should consider supporting this and other similar endeavors so they can't stop the signal and it keeps going: https://github.com/sponsors/odin-lang
gethly 45 minutes ago [-]
I'm looking forward to that huge fight next year between Odin, Zig and Jai :)
Odin will hopefully finally be specced. Jai will finally be out in public in a stable version. And Zig will still be at 0.x, but usable.
Three will enter, but only one will be the victor.
thasso 12 hours ago [-]
You can do lots of the same things in C too, as the author mentions, without too much pain. See for example [1] and [2] on arena allocators (which can be used exactly like the temporary allocator mentioned in the post) and on accepting that the C standard library is fundamentally broken.
From what I can tell, the only significant difference between C and Odin mentioned in the post is that Odin zero-initializes everything whereas C doesn't. This is a fundamental limitation of C but you can alleviate the pain a bit by writing better primitives for yourself. I.e., you write your own allocators and other fundamental APIs and make them zero-initialize everything.
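For a rough idea of what such a primitive can look like, here is a minimal bump/arena allocator sketch (the names, the fixed 16-byte alignment, and the always-zeroing policy are illustrative choices, not taken from [1] or [2]):

    #include <stddef.h>
    #include <string.h>

    typedef struct {
        unsigned char *base;
        size_t         capacity;
        size_t         used;
    } Arena;

    void *arena_alloc(Arena *a, size_t size) {
        size_t offset = (a->used + 15) & ~(size_t)15;   /* 16-byte alignment */
        if (offset + size > a->capacity) return NULL;   /* out of space */
        void *p = a->base + offset;
        a->used = offset + size;
        memset(p, 0, size);                             /* zero-initialize every allocation */
        return p;
    }

    void arena_reset(Arena *a) {
        a->used = 0;   /* release everything at once, like a temporary allocator */
    }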
So one of the big issues with C is really just that the standard library is terrible (or, rather, terribly dated) and that there is no drop-in replacement (like in Odin or Rust where the standard library seems well-designed). I think if someone came along and wrote a new C library that incorporates these design trends for low-level languages, a lot of people would be pretty happy.
The author literally says that they used to do that in C. And I've done a lot of those things in C too; it just doesn't mean that C has good defaults or good ergonomics for many of the tasks other languages have been designed to be good at.
9dev 11 hours ago [-]
I am not a C programmer, but I have been wondering this for a long time: People have been complaining about the standard library for literal decades now. Seemingly, most people/companies write their own abstractions on top of it to ease the pain and limit exposure to the horrors lurking below.
Why has nobody come along and created an alternative standard library yet? I know this would break lots of things, but it’s not like you couldn’t transition a big ecosystem over a few decades. In the same time, entire new languages have appeared, so why is it that the C world seems to stay in a world of pain willingly?
Again, mind you, I’m watching from the outside, really just curious.
dspillett 10 hours ago [-]
> Why has nobody come along and created an alternative standard library yet?
Probably, IMO, because not enough people would agree on any particular secondary standard such that one would gain enough attention and traction[1] to be remotely considered standard. Everyone who already has their own alternatives (or just wrappers around the current stdlib) will most likely keep using them unless by happenstance the new secondary standard agrees (by definition, a standard needs to be at least somewhat opinionated) closely with their local work.
Also, maintaining a standard, and a public implementation of it, could be a faffy and thankless task. I certainly wouldn't volunteer for that!
[Though I am also an outsider on the matter, so my thoughts/opinions don't have any particular significance, and an insider might come along and tell us that I'm barking up the wrong tree]
--------
[1] This sort of thing can happen, but is rare. jQuery became an unofficial standard for DOM manipulation and related matters for quite a long time, to give one example - but the gulf between the official standard (and its bad common implementations) at the time and what libraries like jQuery offered was much larger than the benefits a secondary C stdlib standard might give.
gingerBill 11 hours ago [-]
Because to be _standard_, it would have to come with the compiler toolchain. And if it's scattered around on the internet, people will not use it.
I tried to create my own alternative about a decade ago which eventually influenced my other endeavours.
But another big reason is that people use C and its stdlib because that's what it is. Even if it is bad, it's the "standard" and trivially available. Most code relies on it, even code that has its own standard library alternative.
HexDecOctBin 11 hours ago [-]
> Why has nobody come along and created an alternative standard library yet?
Everybody has created their own standard library. Mine has been honed over a decade, why would I use somebody else's? And since it is designed for my use cases and taste, why would anyone use mine?
yusina 11 hours ago [-]
> Why has nobody come along and created an alternative standard library yet?
Because people are so terribly opinionated that the only common denominator is that the existing thing is bad. For every detail that somebody will argue a modern version should have, there will be somebody else arguing the exact opposite. Both will be highly opinionated and for each of them there is probably some scenario in which they are right.
So, the inability of the community to agree on what "good" even means, plus the extreme heterogeneity of the use cases for C, is probably the answer to your question.
uecker 4 hours ago [-]
I would not agree that the ergonomics are so much better in Odin that switching to another language is worth giving up the advantages of a much larger ecosystem. For a hobby project this may not matter at all, of course.
gingerBill 3 hours ago [-]
Odin has very good FFI with its `foreign import` system, so you can still use libraries written in C, Objective-C, or any other language. And Odin already supports tools like asan, tsan, etc. too. So what are the things that you are giving up, in practice, if you were using Odin instead of C?
uecker 3 hours ago [-]
Use of libraries via FFI also has a cost in terms of ergonomics. But then, what do you give up: For C, many tools exist that support C from very basic stuff such as syntax highlighting to formal verification etc. There are plenty of C programmers, C tutorials, C books etc. There is an industry supporting tooling with many different implementations even for obscure platforms. There is an ISO standard and processes for certification. It is also basically guaranteed that C exists and will be supported in the next 50 years no matter what. Once you start using a niche language you lose a lot of this.
(But I think Odin is great!)
nickpsecurity 3 hours ago [-]
For those use cases, I've always encouraged new languages to support translating to a verifiable subset of C or another language with such tooling. Errors detected in the other language can be corrected in the source written in the new language. The abstraction gaps must be minimized, though.
arp242 12 hours ago [-]
> I think if someone came along and wrote a new C library that incorporates these design trends for low-level languages, a lot of people would be pretty happy.
I suppose glib comes the closest to this? At least the closest that actually sees fairly common usage.
I never used it myself though, as most of my C has been fairly small programs and I never wanted to bother people with the extra dependency.
leecommamichael 8 hours ago [-]
Odin was made for me, also. It has been 4 years and I’m still discovering little features that give me the control and confidence I wish I’d had writing C++.
I returned to the language after a stint of work in other tech and to my utter amazement, the parametric polymorphism that was added to the language felt “right” and did not ruin the comprehensibility of the core library.
Thank you gingerBill!
jongjong 10 minutes ago [-]
This reminds me of how I wrote a simple query language which can be written inside HTML attribute tags. Its killer feature is that it doesn't need quotation marks to represent strings. It knows if something is a property/variable or a string (e.g. user input) just based on its position in the command. It achieves this by being very strict with spaces. It doesn't collapse/merge multiple spaces down to one because this can wreck edge cases where a user input string might start with a space.
I like that #soa stuff – can you make your own custom #foo thing that does other things/memory layouts etc?
jay_kyburz 10 hours ago [-]
I've been messing around with Odin and Raylib for a few weeks. I've been interested in trying Raylib for a long time; it has a huge list of language bindings. I chose Odin for different reasons than I think many would. Perhaps superficial reasons.
I'm a game-play programmer and not really into memory management or complex math. I like things to be quick and easy to implement. My games are small. I have no need for custom allocators or SOA. All I want is a few thousand sprites at ~120fps. I normally just work in the browser with JS. I use Odin like it's a scripting language.
I really like the dumb stuff like... no semicolons at the end of lines, no parentheses around conditionals, the case statement doesn't need breaks, no need to write var or let, and the basic iterators are nice. Having a built-in vector2 type is really nice. Compiling my tiny programs is about as fast as refreshing a browser page.
I also really like C-style procedural programming rather than object-oriented code, but when you work in a language that most people use as OO, or whose standard library is OO, your program will end up with mixed paradigms.
It's only been a few weeks, but I like Odin. It's like a statically typed and compiled scripting language.
weiwenhao 4 hours ago [-]
I don't mean to over-promote it because version 0.5 is not ready yet, but the Nature programming language https://github.com/nature-lang/nature basically meets your expectations, except for the use of var to declare variables, probably because I also really like simplicity.
Here's an example of how I use the nature and raylib bindings.
That's one of the best intro pages I've seen for a language. I really like how it has tabs for different practical examples (http, generics, coroutine, etc.).
sph 2 hours ago [-]
Looks ergonomic enough at first sight. The important thing for new languages is mindshare, so keep at it, post a Show HN when you feel it’s ready and perhaps it’ll pick up steam.
(Personally I have spent my weekend evaluating C-like languages and I need a break and to reset my palate for a bit)
karl_zylinski 10 hours ago [-]
I like this aspect about Odin. It doesn't try to fundamentally solve any new problems. Instead it does many things right. So it becomes hard to say "this is why you should use Odin". It's more like, try it for yourself and see if you like it :)
codr7 9 hours ago [-]
Which parts of the C standard library have any need for allocators?
gingerBill 8 hours ago [-]
Loads of libc functions allocate. The trivial ones are malloc/calloc/free/strdup/etc., but many other things within it will also allocate, like qsort. And that means you cannot change how those things allocate either.
uecker 4 hours ago [-]
malloc/calloc/free is the allocator, so it makes no sense to pass an allocator to it. qsort does not allocate. I think strdup is the only other function that allocates, and it is a fairly new convenience function that would not be as convenient if you had to pass an allocator.
gingerBill 3 hours ago [-]
Many implementations of qsort do allocate using malloc.
And I know malloc/free is the allocator, but you cannot override it either.
uecker 3 hours ago [-]
Can you point me to a qsort that does call malloc? This is news to me, as the API is designed to not require it. There is no standard way to override malloc/free (which would be a limitation when using other libraries that do not make the allocator configurable), but it is often supported (e.g. malloc_hook in GNU libc) or can be done using the linker.
TBH it was also news to me, I discovered it randomly while browsing vulnerabilities…
Printf also allocates, and a ton of other stdlib functions as well.
knowitnone 5 hours ago [-]
You're saying you like Odin because it provides this feature in its stdlib, but how hard would it be for C to provide this? And if C provided this, would you stay with C? So is this a failure of the C community to evolve and improve?
karl_zylinski 3 hours ago [-]
There are many additional annoying things with C. Odin just happens to choose my preferred solutions to many of those issues. It's a lot of tiny things that are right rather than a single "killer feature". I recommend just trying it and see if it makes any sense to you.
joejoo 2 hours ago [-]
What's the vibe coding landscape look like for Odin?
spicyusername 1 hours ago [-]
What... does that question mean?
Like... how easy is it to not know how anything works and generate a "working" program, using the loosest possible definition of "working", using LLMs?
jmull 9 hours ago [-]
The author is excited that they can do all the things in Odin that they can do in C.
So it strikes me that a new language may be the wrong approach to addressing C's issues. Can they truly not be addressed with C itself?
E.g., here's a list of some commonly mentioned issues:
* standard library is godawful, and composed almost entirely of foot guns. New languages fix this by providing new standard libraries. But that can be done just as well with C.
* lack of help with safety. The solutions people put forward generally involve some combination of static analysis disallowing potentially unsafe operations, runtime checks, and provided implementations of mechanisms around potentially unsafe operations (like allocators, and slices). Is there any reason these cannot be done with C (in fact, I know they all have been done).
* lack of various modern conveniences. I think there's two aspects of this. One is aesthetics -- people can feel that C code is inelegant or ugly. Since that's purely a matter of personal taste, we have to set that aside. The other is that C can often be pretty verbose. Although the syntax is terse, its low-level nature means that, in practice, you can end up writing a relatively large number of lines of code to do fairly simple things. C alternatives tend to provide syntax conveniences that streamline common & preferred patterns. But it strikes me that an advanced enough autocomplete would provide the same convenience (albeit without the terseness). We happen to have entered the age of advanced autocomplete.
Building a new language, along with the ecosystem to support it, is a lot of fun. But it also seems like a very inefficient way to address C's issues, because you have to recreate so much (including all the things about C that aren't broken), and you have to reach some critical mass of adoption/usage to become relevant and sustainable. And to be frank, it's also a pretty ineffective way to address C's issues because it doesn't actually do anything to help all the existing C code. Very few projects are in a position to be rewritten. Much better would be to have a fine-grained set of solutions that code bases could adopt incrementally according to need and opportunity.
Of course, I realize all this has been happening with C all along. I'm just pointing out that this seems like the right approach, while these C alternatives, fun and exciting as they are (as far as these things go), are probably just sound and fury that will ultimately fade away. (In fact, it might be worse if some catch on... C and all the C code bases will still be there, we'll just have more fragmentation.)
gingerBill 9 hours ago [-]
I'm the creator of the Odin programming language and I originally tried to approach it by fixing C. And my conclusion was that C could not be fixed.
I made my own standard library to replace libc. Addressing the lack of safety is hard when you don't have a decent enough type system. C's lack of a proper array type is a good example of this.
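A small illustration of that array problem (a sketch, not taken from the comment): the length is not part of what actually gets passed around, so nothing can be bounds-checked.

    #include <stdio.h>

    void print_size(int xs[16]) {        /* the "[16]" is ignored: xs is really just an int* */
        printf("%zu\n", sizeof(xs));     /* size of a pointer, not 16 * sizeof(int) */
    }

    int main(void) {
        int data[4] = {1, 2, 3, 4};
        print_size(data);                /* the array decays to a pointer; its length is lost */
        return 0;
    }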
Before making Odin, I tried making my own C compiler with some extensions, specifically adding proper arrays (slices) with bounds checking, and adding `defer`. This did help things a lot, but it wasn't enough. C still had fundamentally broken semantics in so many places that just "fixing" the problems of C in C was not enough.
I didn't want to make Odin initially, but it was the conclusion I had after trying to fix something that cannot be fixed.
This is (almost *) bounds safety with -fsanitize=bounds
*) with some pending compiler improvements it will be perfect
(edit: updated godbolt link)
nasretdinov 5 hours ago [-]
I feel like Odin is the closest to "normal C", especially in its simplicity, which is often undervalued. If C was easily fixable it probably would've been done already anyway...
bondant 4 hours ago [-]
Do you know if there are any professional games that have been made with Odin and released on Steam or on game consoles?
The most powerful objection to the proposition that C can be fixed is the many, many attempts lying by the side of the road.
There seems to be some sort of force in the programming language landscape that prevents a language that is too similar to another language from being able to succeed. And I don't just mean something like "Python versus Ruby", although IMHO even that was a bit of a fluke due to geography, but the general inability to create a variant of C that everybody uses.
The other problem is you still end up pushed in the direction of a new language anyhow. Let's say you create C-New and it "fixes pointers" so now they're safe. I don't care how you do that. But obviously that involves writing into C-New some new guarantees that pointers make. If you're conceiving of this as "still basically C", such that you can just call into C code, then when you pass your C-New pointer into C-Old, you can no longer make those guarantees. You still basically have to treat C-Old as a remote call, just like Python or Go or Lua do, and put it at arm's length.
The extent to which you can "fix C" without creating this constraint is fairly limited. It's a very well defined language at this point with extremely strong opinions.
As for "C alternatives", actually, the era of C alternatives has passed. C++, Java, Objective-C, C#: many takes on the problem, none perhaps nailing the totality of the C problem space, but the union of them all pretty much does. The era we have finally, at long last, entered is the era of programming languages that aren't even reactions to C anymore, but are just their own thing.
The process of bringing up an ecosystem that isn't C is now well-trod. It's risky, certainly, but it's been done a dozen times over. It's often the only practical way forward.
uecker 4 hours ago [-]
I agree, this is absolutely the right approach. And any suggestions to make C better are very welcome.
Fraterkes 11 hours ago [-]
[flagged]
gingerBill 11 hours ago [-]
My hobbies would not be suitable for __HackerNews__. What do you think HackerNews is for?
latexr 8 hours ago [-]
While I don’t agree with the criticism of the person you replied to, it’s worth pointing out that Hacker News is very explicitly¹ for more than computer talk.
> On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.
Exactly. That's what I meant with my question about what he thinks HackerNews is for.
christophilus 10 hours ago [-]
Keep posting, gingerBill. I love Odin threads when they pop up here. And I love Odin. Keep up the good work.
CrimsonRain 11 hours ago [-]
For many people, hacking away is (the) hobby.
It's a sad situation when people like you pollute this field with your "computer is just a tool for me to make money" attitude.
yusina 11 hours ago [-]
As long as programmers view a program as a mechanism that manipulates bytes in flat memory, we will be stuck in a world where this kind of topic seems like a success. In that world, an object puts some structure above those memory bytes and obviously an allocator sounds like a great feature. But you'll always have those bytes in the back of your mind and will never be able to abstract things without the bytes in memory leaking through your abstractions. The author even gives an example for a pretty simple scenario in which this is painful, and that's SOA. As long as your data abstraction is fundamentally still a glorified blob of raw bytes in memory, you'll be stuck there.
Instead, data needs to be viewed more abstractly. Yes, it will eventually manifest in memory as bytes in some memory cell, but how that's laid out and moved around is not the concern of you as the programmer that's a user of data types. Looking at some object attributes foo.a or foo.b is just that - the abstract access of some data. Whether a and b are adjacent in memory, or even on the same machine, or even backed by data cells in some physical memory bank, should be immaterial. Yes, in some very specific (!) cases, optimizing for speed makes it necessary to care about locality, but for those cases, the language or library need to provide mechanisms to specify those requirements and then they will lay things out accordingly. But it's not helpful if we all keep writing in some kind of glorified assembly language. It's 2025 and "data type" needs to mean something more abstract than "those bytes in this order laid out in memory like this", unless we are writing hand-optimized assembly code which most of us never do.
lynx97 10 hours ago [-]
Well, the DOD people keep finding that caring about the cache is more helpful regarding performance than the casual programmer might think. Even compiler people are thinking about ditching the classical AST for something DOD-based. I admin HPC systems as a dayjob, and I rarely see programmers aware of modern CPU design and how to structure your data such that it actually performs. I get that you'd like to add more abstractions to make programming easier, but I worry that this only adds to the (already rampant) inefficiency of most programs. The architecture is NOT irrelevant. And with every abstraction you put in, you increase the distance the programmer has from knowing how the architecture works. Maybe that's fine for Python and other high level stuff, but it is not a good idea IMO when dealing with programs with longer runtimes...
bob1029 9 hours ago [-]
> caring about the cache is more helpful regarding performance than the casual programmer might think.
Cache is easily the most important consideration if you intend to go fast. The instructions are meaningless if they or their dependencies cannot physically reach the CPU in time.
The latency difference between L1/L2 and other layers of memory is quite abrupt. Keeping workloads in cache is often as simple as managing your own threads and tightly controlling when they yield to the operating system. Most languages provide some ability to align with this, even the high level ones.
whstl 9 hours ago [-]
IMO, DOD shows that you don’t have to sacrifice developer ergonomics for performance.
ECS is vastly superior as an abstraction to pretty much everything we had before in games. Tightly coupled inheritance chains of the 90s/2000s were minefields of bugs.
Of course perhaps not every type of app will have the same kind of goldilocks architecture, but I also doubt anyone will stumble into something like that unless they’re prioritizing it, like game programmers did.
gingerBill 9 hours ago [-]
I won't get into it too much but virtually no one needs ECS, and if you have to ask how to do it, it's not for you. There are much better ways to organize a game for most people than the highly generic relational-database-like structure that is ECS. ECS does make sense in certain contexts but most people do not need it.
But I agree that DOD in practice is not a compromise between performance and ergonomics, and Odin kind of shows how that is possible.
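For readers unfamiliar with how Odin keeps that ergonomic, here is a minimal sketch of its #soa arrays (the Particle type and the sizes are invented for illustration, not taken from the article):

    package main

    import "core:fmt"

    Particle :: struct {
        pos: [2]f32,
        vel: [2]f32,
    }

    main :: proc() {
        // #soa stores each field in its own contiguous array under the hood.
        particles: #soa[1024]Particle

        particles[0].vel = {1, 0}
        for i in 0..<len(particles) {
            particles[i].pos += particles[i].vel // element-wise array arithmetic
        }

        // The individual field arrays are still reachable when needed:
        fmt.println(particles.pos[0], particles.vel[0])
    }

The access syntax stays "array of structs" while the compiler stores each field contiguously, which is the cache-friendly layout the DOD discussion above is about.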
yusina 10 hours ago [-]
That's great! Let the compiler figure out the optimal data layout then! Of course the architecture is relevant. But does everybody need to consider L2 and L3 sizes all the time? Optimizing this is for machines, with very rare exceptions. Expecting every programmer to do optimal data placement by hand is similar to expecting every programmer to call malloc and free in the right order and the correct number of times. And we know how reliable that turned out.
gingerBill 9 hours ago [-]
The compiler cannot know the _purpose_ of your program, and thus cannot "figure out the optimal data layout". It's metaphysically not possible, let alone technically.
Not everybody needs to worry about L2 or L3 most of the time, but if you are using a systems-level programming language where it might be of a concern to you at some point, it's extremely useful to be able to have that control.
> expecting every programmer to call malloc and free in the right order
The point of custom allocators is to not need to do the `malloc`/`free` style of memory allocation, and thus reduce the problems which that causes. And even if you do still need that style, Odin and many other languages offer features such as `defer` or even the memory tracking allocator to help you find the problems. Just like what was said in the article.
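For anyone who hasn't seen it, the tracking-allocator setup being referred to looks roughly like this (a sketch based on core:mem; exact field and procedure names may differ slightly between Odin versions):

    package main

    import "core:fmt"
    import "core:mem"

    main :: proc() {
        // Wrap the default allocator so every allocation made through
        // context.allocator is recorded.
        track: mem.Tracking_Allocator
        mem.tracking_allocator_init(&track, context.allocator)
        context.allocator = mem.tracking_allocator(&track)

        defer {
            // Anything still in the map at shutdown was never freed.
            for _, entry in track.allocation_map {
                fmt.printf("%v leaked %v bytes\n", entry.location, entry.size)
            }
            mem.tracking_allocator_destroy(&track)
        }

        leaked := new(int) // deliberately never freed, so it is reported above
        _ = leaked
    }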
munificent 2 hours ago [-]
> Let the compiler figure out the optimal data layout then!
Unfortunately, that's not possible. The optimal data layout depends on the order that data is accessed, which isn't knowable without knowing all possible ways the program could execute on all possible inputs.
lynx97 10 hours ago [-]
I am reluctant to believe compiler optimisations can do everything. Kind of reminds me of the time when people thought auto parallelisation would be a plausible thing. It never really happened, at least not in a predictably efficient way.
johnnyjeans 8 hours ago [-]
> That's great! Let the compiler figure out the optimal data layout then!
GHC, which is without a doubt the smartest compiler you can get your grubby mitts on, is still an extremely stupid git that can't be trusted to do basic optimizations. Which is exactly why it exposes so many special intrinsic functions. The "sufficiently smart compiler" myth was thoroughly discounted over 20 years ago.
gingerBill 11 hours ago [-]
> As long as programmers view a program as a mechanism that manipulates bytes in flat memory...
> Yes, it will eventually manifest in memory as bytes in some memory cell...
So people view a program the way the computer actually deals with it? And how they need to optimize for it, since they are writing programs for that machine?
So what is an example of your abstraction that you are talking about? Is there a language that already exists that is closer to what you want? Otherwise you are talking vaguely and abstractly and it doesn't really help anyone understand your point of view.
yusina 11 hours ago [-]
Real world example. You go sit in your ICE car. You press the gas pedal and the car starts moving. And that's your mental model. Depressing pedal = car moves. You do not think "depress pedal" = "more gasoline to the engine" = "stronger combustion" = "higher rpm" = "higher speed". But that's the level those C and C-like language discussions are always on. The consequence of you using this abstraction in your car is that switching to a hybrid or lately an EV is seamless for most people. Depress pedal, vehicle moves faster. Whether there is a battery involved or some hydrogen magic or an ICE is insubstantial. Most of the time. Exceptions are race track drivers. But even those drop off their kids at school during which they don't really care what's under the hood as long as "depress pedal" = "vehicle moves faster".
Intermernet 10 hours ago [-]
This may be true, but it's also false. Many regular drivers have an understanding of how the machine they're driving works. Mechanical sympathy is one of the most important things I've ever learnt. It applies to software as well. Knowing how the data structures are laid out in memory, knowing how the cache works, knowing how the compiler messes with the loops and the variables. These aren't necessarily vital information, and good compilers mean that you can comfortably ignore much of these things, but this knowledge definitely makes you a better developer. Same as knowing how the fuel injection system or the aspiration of your ICE will make you a better driver.
yusina 10 hours ago [-]
I'm totally with you that it's useful knowledge. It's one of the main differences between a Youtube/bootcamp-trained programmer and a university-CS-educated software engineer, though either "side" has outliers too.
But there is a fine line between having general understanding of the details of what's going on inside your system and using that knowledge to do very much premature optimizations and getting stuck in a corner that is hard to get out of. Large parts of our industry are in such a corner.
It's fun to nerd out about memory allocators, but that's not contributing to overall improvements of software engineering as a craft which is still too much ad hoc hacking and hoping for the best.
pjc50 11 hours ago [-]
The perfect analogy, because sometimes people want to drive a manual car, and sometimes people aren't American and it's the default.
tough 10 hours ago [-]
PRESS PEDAL
CAR STOPS
DIDNT SHIFT UP
johnnyjeans 9 hours ago [-]
> You do not think
Actually I do, and I include the inertia and momentum of every piece of the drive-train as well, and the current location of the center of gravity. I'm thinking about all of these things through the next predicted 5 seconds or so at any given time. It comes naturally and subconsciously. To say nothing of how you really aren't going to be driving a standard transmission without that mental model.
Your analogy is appropriate for your standard American whose only experience with driving a car is the 20 minute commute to work in an automatic, and thus more like a hobbyist programmer or a sysadmin than someone whose actual craft is programming. Do you really think truckers don't know in their gut what their fuel burn rate is based on how far they've depressed the pedal?
hoseja 10 hours ago [-]
Uhhhhh that's kind of how I think about the gas pedal though. There's some lag. The engine might stall a bit if you try to accelerate uphill in a wrong way. There's ideal RPM range. Etc.
yusina 11 hours ago [-]
And you were perhaps asking about programming languages. Python does not model objects as bytes in physical memory. Functional languages normally don't. That all has consequences, some of which the "close to the metal" folks don't like. But throwing the "but performance" argument at anything that moves us beyond the 80s is really getting old.
gingerBill 10 hours ago [-]
Thank you for telling me you have no idea why people want or need to use a systems-level programming language.
And yes, I explicitly asked for a language: "Is there a language that already exists that is closer to what you want?", which means your reading comprehension isn't very high.
In your analogy, it's still extremely oversimplified because what about a manual car, which is all I have ever driven? I don't have just an accelerator and a brake, but also a clutch. I have many other things to deal with too. It's nowhere near as simple as you are making out, and that kind of makes your analogy useless.
yusina 10 hours ago [-]
> Thank you for telling me you have no idea why people want or need to use a systems-level programming language.
> And yes, I explicitly asked for a language: "Is there a language that already exists that is closer to what you want?", which means your reading comprehension isn't very high.
Really? Two insults packaged into two paragraphs? Was that really necessary? It's possible to discuss technical disagreements without insulting others.
I'm doing systems-level programming every day, some of it involves C. It provides me with the perspective from which I'm expressing my views. There are other views, thankfully, and a discussion allows to highlight the differences and perhaps provide everybody with a learning opportunity. That's what I'm here for.
Obviously I saw that you asked for a language and I replied to that. I separated the concrete answer to avoid getting things mixed up with the more general point.
gingerBill 9 hours ago [-]
The insults were warranted.
Your initial comment was effectively describing object relational models for every expression, where whether `a.b` is some database query across the world "shouldn't matter to you". So saying we should get away from the model of programming that reflects the underlying hardware and do something more "abstract", while not being clear on what you mean by that, is all kind of insane.
And then the examples of languages you gave being Python (a high level interpreted language that is several orders of magnitude slower than any systems language) and "functional languages" which is still quite vague. If Python is close to what you want (and by that I mean the object-model, and not the declaration syntax), then it is not applicable to anything systems related.
> But throwing the "but performance" argument at [anything] that moves us beyond the 80s is really getting old.
And your knowledge of computers appears to be stuck in the 80s too. There is a reason people want what they want, and why the author of the article likes what Odin is offering. Systems-level programmers want the control to program effectively for the machine. And yes "performance" is actually important, and sadly most programmers don't seem to care whatsoever. There is a reason everything is a web browser now, even the Windows 11 task bar is a web browser. Everything is many, many orders of magnitude slower than it needs to be, or even than it would be if naively implemented. Knowing how memory is laid out, how it is allocated, how it will be affected by cache-lines, how to properly utilize SIMD, and so much more, is extremely important. None of which was even a concern in the 80s.
Perhaps they were. Don't give them, even if they are warranted. See the site guidelines for why.
gingerBill 7 hours ago [-]
Sure. What I originally wrote was not actually intended to be an insult, but I didn't mind calling it that if he took it as insulting.
It's just very weird to see someone be very vague when questioned, and claim things which cannot be true based on what he's stated already in this comment chain.
finnh 5 hours ago [-]
I'm with you here. The whole diatribe falls firmly under "tell me you've never needed to write performant code without telling me you've never needed to write performant code..."
card_zero 11 hours ago [-]
It's always current_year, and I like bytes, thanks.
jandrewrogers 5 hours ago [-]
In the kinds of applications that require a systems language you need to know the object layouts, it isn’t avoidable. Algorithm selection over those objects is dependent on the physical object layout and hardware architecture based on the use case. The compiler doesn’t do any of this and largely can’t because it doesn’t understand what you are trying to do. It has nothing to do with “hand-optimized assembly code”.
You are making a classic “sufficiently smart compiler” argument. These types of problems can’t be automagically solved without strong general AI inside the compiler. See also: SIMD, auto-parallelization, etc. We don’t have strong general AI, never mind inside the compiler.
Until we have such a compiler, you will be dependent on people caring a lot about physical data layout to make your software scalable and efficient.
rixed 10 hours ago [-]
Ideally, the same language would allow programmers to see things at different abstraction levels, no? Because when you are stuck with bytes and allocators and doing everything else manually, it's tedious and you develop hand arthritis in your 30s. But when you have only abstractions and the performance is unacceptable because no magic happened, then it's not great either.
layer8 2 hours ago [-]
GC languages like Java, Haskell, or Lisp give you that, so what you want already exists.
StopDisinfo910 11 hours ago [-]
> Instead, data needs to be viewed more abstractly.
There is no instead here. This is not a choice that has to be made once and for all and there is no correct way to view things.
Languages exist if you want to have a very abstract view of the data you are manipulating and they come with toolchains and compilers that will turn that into low level representation.
That doesn’t preclude the interest of languages which expose this low level architecture.
yusina 11 hours ago [-]
Sure. But solving problems at the wrong level of abstraction is always doomed to fail.
StopDisinfo910 10 hours ago [-]
That would be true if it was always the wrong level of abstraction.
It's obviously not for the low level parts of the toolchain which are required to make very abstract languages work.
Philpax 11 hours ago [-]
While I agree with you to some extent - working with a higher-level language where you _don't_ have that kind of visibility is its own kind of liberating - Odin is very specifically not that kind of language, and is designed for people who want or need to operate in a machine-sympathetic fashion. I don't think that's necessary all the time, but some form of it does need to exist.
ulbu 10 hours ago [-]
and we should probably look at alcoholic liver disease as an expression of capitalism.
data is bytes. period. your suggestion rests on someone else seeing how it is the case and dealing with it to provide you with ways of abstraction you want. but there is an infinity of possible abstractions – while virtual memory model is a single solid ground anyone can rest upon. you’re modeling your problems on a machine – have some respect for it.
in other words – most abstractions are a front-end to operations on bytes. it’s ok to have various designs, but making lower layers inaccessible is just sad.
i say it’s the opposite – it’s 2025, we should stop stroking the imaginaries of the 80s and return to the actual. just invest in making it as ergonomic and nimble as possible.
i find it hard to understand why some programmers are so intent on hiding from the space they inhabit.
[1]: https://www.ralfj.de/blog/2019/07/14/uninit.html
[2]: https://issuetracker.google.com/issues/42402087?pli=1
I am pretty sure that in C, when a program reads an uninitialized variable, it is "undefined behavior", and the program is pretty much allowed, even expected, to crash — for example, if the variable turned out to be on an unallocated page of stack memory.
So literally the variable does not have a value at all, as that part of address space is not mapped to physical memory.
> So literally the variable does not have a value at all, as that part of address space is not mapped to physical memory.
There are vanishingly few platforms where the stack you have in a C program maps to physical memory (even if you consider pages from the OS)
However, I was using that "C programmers" bit to explain the conceptualization aspect, and how it also applies to other languages. Not every language, even among systems languages, has the same concepts as C, especially the same construction as "UB".
I want to push back on the idea that it's a "trade-off", though -- what are the actual advantages of the ZII approach?
If it's just more convenient because you don't have to initialize everything manually, you can get that with the strict approach too, as it's easy to opt-in to the ZII style by giving your types default initializers. But importantly, the strict approach will catch cases where there isn't a sensible default and force you to fix them.
Is it runtime efficiency? It seems to me (but maybe not to everyone) that initialization time is unlikely to be significant, and if you make the ZII style opt-in, you can still get efficiency savings when you really need them.
The explicit initialization approach seems strictly better to me.
The thing is, initialization cost is a lot more than you think it is, especially when it's done on a per-object level rather than a "group" level.
This is kind of the point of trying to make the zero value useful: it's trivially initialized. And in languages that are much more strict in their approach, initialization is done at that per-object level, which means the cost goes from anywhere between free (VirtualAlloc/mmap has to produce zeroed memory) and trivially-linear (e.g. memset), to a lot more nested hierarchies of initialization (e.g. a for-loop with a constructor call for each value).
It's non-obvious why the "strict approach" would be worse, but it's more about how people actually program rather than a hypothetical approach to things.
So of course each style is about trade-offs. There are no solutions, only trade-offs. And different styles will have different trade-offs, even if they are not immediately obvious and require a bit of experience.
A good little video on this is from Casey Muratori, "Smart-Pointers, RAII, ZII? Becoming an N+2 programmer": https://www.youtube.com/watch?v=xt1KNDmOYqA
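To make the per-object versus group-level point concrete, here is a small sketch (the Enemy type and the counts are invented):

    package main

    import "core:fmt"

    Enemy :: struct {
        pos:    [2]f32,
        hp:     int,
        target: ^Enemy,
    }

    main :: proc() {
        // Group level, ZII: one allocation, the memory comes back zeroed,
        // and the zero value is already a valid "inactive" enemy.
        enemies := make([]Enemy, 4096)
        defer delete(enemies)

        // Per-object level: a constructor-style call per element,
        // even though most fields end up as zero anyway.
        init_enemy :: proc(e: ^Enemy) {
            e.pos    = {0, 0}
            e.hp     = 0
            e.target = nil
        }
        for i in 0..<len(enemies) {
            init_enemy(&enemies[i])
        }

        fmt.println(len(enemies), enemies[0])
    }

The second loop is entirely redundant here, which is exactly the kind of cost being described.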
But still, in other parts of the program, ZII is bad! That local or global variable pointing at an ArrayBuffer should definitely not be zero-initialized. Who wants a null pointer, or a pointer to random memory of unknown size? Much better to ensure that a) you actually construct a new TypedArray, and b) you don't use it until it's constructed.
I guess if you see the vast majority of your action happening inside big arrays of structs, pervasive ZII might make sense. But I see most of the action happening in local and temporary variables, where ZII is bad and explicit initialization is what you want.
Moving from JavaScript to TypeScript, to some extent you can get the best of both worlds. TS will do a very good (though not perfect) job of forcing you to initialize everything correctly, but you can still use TypedArray and DataView and take advantage of zero-initialization when you want to.
ZII for local variables reminds me of the SmallTalk / Obj-C thing where you could send messages to nil and they're silently ignored. I don't really know SmallTalk, but in Obj-C, to the best of my knowledge most serious programmers think messages to nil are a bad idea and a source of bugs.
Maybe this is another aspect where the games programming mindset is skewing things (besides the emphasis on low-level performance). In games, avoiding crashes is super important and you're probably willing to compromise on correctness in some cases. In most non-games applications, correctness is super important, and crashing early if something goes wrong is actually preferable.
I normally say "try to make the zero value useful" and not "ZII" (which was a mostly jokey term Casey Muratori came up with as a riff on RAII) because then it is clear that there are cases when it is not possible to do ZII. ZII is NOT a _maxim_ but what you should default to, and then do something else where necessary. This is my point, and I can probably tell you even more examples of where "ZII is bad" than you could think of, but this is the problem with describing the idea to people: they take it as a maxim, not a default.
And regarding pointers, I'm in the camp that nil pointers are the most trivial type of invalid pointer to catch, empirically speaking. Yes they cause problems, but because of how modern systems are structured with virtual memory, they are empirically trivial to catch and deal with. Yes you could design the type system of a language to make nil pointers not be a thing unless you explicitly opt into them, but then that has another trade-off which may or may not be a good thing depending on the application.
The Objective-C thing is just a poorly implemented system for handling `nil`. It should have been more consistent but wasn't. That's it.
I'd argue "correctness" is important in games too, but the conception of "correctness" is very different there. It's not about provability but testability, which are both valid forms of "correctness" but very different.
And in some non-game applications, crashing early is also a very bad thing, and for some games, crashing early is desired over corrupted saves or other things. It's all about which trade-offs you can afford, and I would not try to generalize too much.
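To ground the "useful zero value" default in something concrete, a small sketch (the Config type is made up; the dynamic-array behaviour is as I understand the current semantics):

    package main

    import "core:fmt"

    Config :: struct {
        name: string,   // zero value: ""
        tags: []string, // zero value: nil slice, len 0
        next: ^Config,  // zero value: nil
    }

    main :: proc() {
        c: Config // every field zeroed

        // A nil slice is a perfectly fine "empty" slice:
        fmt.println(len(c.tags)) // 0
        for tag in c.tags {      // iterates zero times
            fmt.println(tag)
        }

        // A zero-value dynamic array can be appended to directly;
        // it allocates from context.allocator on first append.
        names: [dynamic]string
        defer delete(names)
        append(&names, "odin")
        fmt.println(names)
    }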
I don't think I'll ever abandon the idea that making code "correct by construction" is a good goal. It might not always be achievable or practical but I strongly feel it's always something to aim for. For me, silent zero initialization compromises that because there isn't always a safe default.
I think nil pointers are like NaNs in arithmetic. When a nil or a NaN crops up, it's too late to do anything useful with it, you generally have to work backwards in the debugger to figure out where the real problem started. I'd much rather be notified of problems immediately, and if that's at compile time, even better.
In the real world, sure, I don't code review every single arithmetic operation to see if it might overflow or divide by zero. But when the compiler can spot potential problem areas and force me to check them, that's really useful.
Messages sent to the Smalltalk UndefinedObject instance are not silently ignored — #doesNotUnderstand.
Sometimes that run time message lookup has been used to extend behavior —
1986 "Encapsulators: A New Software Paradigm in Smalltalk-80"
https://dl.acm.org/doi/pdf/10.1145/28697.28731
Clearly the lack of zeroing in C was a trade-off at the time. Just like UB on signed overflow. And now people seem to consider them "obvious correct designs".
"Improperly using a variable before it is initialized" is a very common class of bug, and an easy programming error to make. Zero-initializing everything does not solve it! It just converts the bugs from ones where random stack frame trash is used in lieu of the proper value into ones where zeroes are used. If you wanted a zero value, it's fine, but quite possibly you wanted something else instead and missed it because of complex initialization logic or something.
What I want is a compiler that slaps me when I forget to initialize a proper value, not one that quietly picks a magic value it thinks I might have meant.
It's just that "all values are defined to be zero-initialized, and you can use them as such" is a horrible decision. It means that you cannot even get best effort warnings for lack of initialization, because as far as the compiler knows you might have meant to use the zero value.
see https://msrc.microsoft.com/blog/2020/05/solving-uninitialize...
Initializing the stack var to zero would have helped mitigate the recently discovered problem in GTA San Andreas (the real problem is an unvalidated data file) - see https://cookieplmonster.github.io/2025/04/23/gta-san-andreas...
https://en.wikipedia.org/wiki/Rice%27s_theorem?useskin=vecto...
This isn't some whacko far out idea. Most languages already today don't have any way (modulo "unsafe", or some super-carefully declared and defined method that is not the normal operation of the language) of reading uninitialized memory. It's only the residual C-likes bringing up the rear where this is even a question.
(I wouldn't count Odin's "explicitly label this as not getting initialized"; I'm talking about defaults being sharp and pointy. If a programmer explicitly asks for the sharp and pointy, then it's a valid choice to give it to them.)
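For reference, the explicit opt-out being alluded to is Odin's `---` value; a minimal sketch:

    package main

    import "core:fmt"

    main :: proc() {
        a: int       // zero-initialized to 0, the language default
        b: int = --- // explicitly left uninitialized: the sharp and pointy option

        b = 42 // fine once it has actually been written
        fmt.println(a, b)
    }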
But it feels like it would be hard for something that isn't a formal proof assistant to prove this. The equivalent Rust code (unidiomatic as it would be to roll your own linked list code) would require you to set "new_list" to be None before the loop starts.
If you have a sparse array of values (it might be structs), then you can use a zero value to mark an entry that isn’t currently in use, without the overhead of having to (re-)initialize the whole array up-front. This helps in particular if it’s only one byte per array element that would need to be initialized as a marker, while the compiler would otherwise force you to initialize the complete array elements.
Similarly, there are often cases where a significant part of a struct typically remains set to its default values. If those are zero, which is commonly the case (or commonly can be made the case), then you can save a significant amount of extra write operations.
Furthermore, it also allows flexibility with algorithms that lazy-initialize the memory. An algorithm may be guaranteed to always end up initializing all of its memory, but the compiler would have no chance to determine this statically. So you’d have to perform a dummy initialization up-front just to silence the compiler.
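A sketch of the sparse-array case in Odin (the Slot type and the marker convention are invented): the zero value doubles as the "free" marker, so no up-front initialization pass is needed.

    package main

    import "core:fmt"

    Slot :: struct {
        generation: u32, // 0 means "free / never used"
        value:      f32,
    }

    main :: proc() {
        // make returns zeroed memory, so every slot already reads as free;
        // there is no O(n) "mark everything unused" pass.
        slots := make([]Slot, 1 << 16)
        defer delete(slots)

        slots[123] = Slot{generation = 1, value = 3.5}

        in_use := 0
        for s in slots {
            if s.generation != 0 {
                in_use += 1
            }
        }
        fmt.println("slots in use:", in_use) // 1
    }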
I "often enough" drive around with my car without crashing. But for the rare case that I might, I'm wearing a seatbelt and have an airbag. Instead of saying "well I better be careful" or running a static analyzer on my trip planning that guarantees I won't crash. We do that when lives are on the line, why not apply those lessons to other areas where people have been making the same mistakes for decades?
It is impossible to post about a language on this forum before the pearl clutching starts, if the compiler is a bit lenient instead of triple-checking every single expression and making you sign a release of liability.
Sometimes, ergonomics and ease-of-programming win over extreme safety. You’ll find that billion dollar businesses have been built on zero-as-default (like in Go), and often people reaching for it or Go are just writing small personal apps, not cruise missile navigation systems.
It gets really tiring.
/rant
Or are you saying that a certain level of bugs is fine and we are at that level? Are you fine with the quality of all the software out there? Then yes, this discussion is probably not for you.
This is the kind of generalisation I'm ranting against.
It is not constructive to extrapolate any kind of discussion about a single, perhaps niche, programming language into applicable advice for "all the software out there". But you probably knew that already.
There is a lot of subpar software out there, and the rest is largely decent-but-not-great. If it's security I want, that's commonly lacking, and hugely so. If it's performance I want, that's commonly lacking[0]. If it's documentation...you get the idea. We should have rigor by default, and if that means software is produced slower, I frankly don't see the problem with that. (Although commercial viability has gone out the window unless big players comply.) Exceptions will be carved out depending on the scope of the program. It's much harder to add in rigor post hoc. The end goal is quality.
The other issue is that a program's scope is indeed broader than controlling lives, and yet there are many bad outcomes. If I just get my passwords stolen or my computer crashes daily or my messaging app takes a bit too long to load every time, what is the harm? Of course those are wildly different outcomes, but I think at least the first and second are obviously quality issues, and I think the third is also important. Why is the third important? When software is such an integral part of users' lives, minor issues cause faults that prompt workarounds or inefficiencies. [1] discusses a similar line of thought. I know I personally avoid doing some actions commonly (e.g. check LinkedIn) because they involve pain points around waiting for my browser to load and whatnot, nothing major but something that's always present. Software ("automation") in theory makes all things that the user implicitly desires to be non-pain points for the user.
An interesting blend of issues is system dialog password prompts, which users will generally try to either avoid or address on autopilot, which tends to reduce security. Or take system update restarts, which induce not updating frequently. Or take what is perhaps my favorite invective: blaming Electron apps. One Electron app can be inconvenient. Multiple Electron apps can be absurd. I feel like I shouldn't have to justify calling out Electron on HN, but I do, but I won't here.
And take unintended uses: if I need to set down an injured person across two chairs, I sure hope a chair doesn't break or something. Sure, that's not the intended use case of a chair, but I don't think it's unreasonable that a well-made chair would not fail to live up to my expectations. I wouldn't put an elephant on the chair either way, because intuitively I don't expect that much. Even then, users may expect more out of software than is reasonable, but that should be remedied and not overlooked.
Do not mistake having users for having a quality product.
[0] https://news.ycombinator.com/item?id=43971464 [1] https://blog.regehr.org/archives/861
To many people reading this, this may be a "duh", but I find it is worth pointing out, because there are still some programmers who believe that C is somehow the "default" or "real" language of a computer and that everything about C is true of other languages, but that is not the case. Undefined behavior in C is undefined in C, specifically. Try to avoid taking ideas about UB out of C, and to the extent that they are related (which slowly but surely decreases over time), C++. It's the language, not the hardware, that is defining UB.
C is just one language of many and you do not have to define the rules of a new language to it.
so... like Rust?
Java has the same problem.
(Dart, which I work on, does not. In Dart, you really truly can't observe an instance field before it has been initialized.)
If supplying Guid is optional, you just make it Guid?.
To be fair, I don't think offering default(T) by default (ha) is the best choice for structs. In F#, you have to explicitly do `Unchecked.defaultof` and otherwise it will just not let you have your way - it is watertight. I much prefer this approach even if it can be less convenient at times.
Without unsafe, zero init is not needed.
This is opposite to the way unsafe (either syntax or known unsafe APIs) is used today.
All use of p/invoke is also unsafe though, even if the keyword isn’t used. And it’s much more common to wrap a C library than to write a buffer pool.
Much better outcomes and failure modes than RAII. IIRC, Odin mentions game programming as one of its use cases.
He thinks that every RAII variable is a failure point and that you only have to think about ownership if you are using RAII, so it incurs mental overhead.
The reality is that you have to understand the lifetime and ownership of your allocations no matter what. If the language does nothing for you the allocation will still have a lifetime and a place where the memory is deallocated.
He also talks about combining multiple allocations into a single allocation that then gets split into multiple pointers, but that could easily be done in C++.
C doesn’t even care. You can cast an int to a function pointer if you want.
With Odin it’s taken me like 5 minutes including reading the section of the docs for the first time.
They can decay to function pointers (`$(unsafe)? $(extern)? fn($(inp),*) $(-> $(output))?`, example: unsafe extern fn() -> i32), which you can freely cast between function pointers, `*const/mut T` pointers or `usize`s. https://doc.rust-lang.org/reference/types/function-pointer.h...
I am dropping the link here: those who can should donate, and even if you don't use Odin, you should consider supporting this and other similar endeavors so they can't stop the signal and it keeps going: https://github.com/sponsors/odin-lang
Odin will be hopefully finally specced. Jai will be finally out in public in a stable version. And Zig will be still at 0.x, but usable.
Three will enter, but only one will be the victor.
From what I can tell, the only significant difference between C and Odin mentioned in the post is that Odin zero-initializes everything whereas C doesn't. This is a fundamental limitation of C but you can alleviate the pain a bit by writing better primitives for yourself. I.e., you write your own allocators and other fundamental APIs and make them zero-initialize everything.
So one of the big issues with C is really just that the standard library is terrible (or, rather, terribly dated) and that there is no drop-in replacement (like in Odin or Rust where the standard library seems well-designed). I think if someone came along and wrote a new C library that incorporates these design trends for low-level languages, a lot of people would be pretty happy.
[1]: https://www.rfleury.com/p/untangling-lifetimes-the-arena-all...
[2]: https://nullprogram.com/blog/2023/10/08/
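In Odin terms, the arena pattern from those links boils down to something like this (a sketch using core:mem; the procedure names are from memory and may differ slightly between versions):

    package main

    import "core:fmt"
    import "core:mem"

    main :: proc() {
        // One fixed backing buffer; everything allocated from the arena lives
        // in it and is released in one go, rather than malloc/free per object.
        backing: [4096]byte
        arena: mem.Arena
        mem.arena_init(&arena, backing[:])

        context.allocator = mem.arena_allocator(&arena)

        xs := make([]int, 100) // served from the arena, already zeroed
        ys := make([]f32, 100)
        fmt.println(len(xs), len(ys))

        free_all(context.allocator) // releases every arena allocation at once
    }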
Why has nobody come along and created an alternative standard library yet? I know this would break lots of things, but it’s not like you couldn’t transition a big ecosystem over a few decades. In the same time, entire new languages have appeared, so why is it that the C world seems to stay in a world of pain willingly?
Again, mind you, I’m watching from the outside, really just curious.
Probably, IMO, because not enough people would agree on any particular secondary standard such that one would gain enough attention and traction¹ to be remotely considered standard. Everyone who already has their own alternatives (or just wrappers around the current stdlib) will most likely keep using them unless by happenstance the new secondary standard agrees (by definition, a standard needs to be at least somewhat opinionated) closely with their local work.
Also, maintaining a standard, and a public implementation of it, could be a faffy and thankless task. I certainly wouldn't volunteer for that!
[Though I am also an outsider on the matter, so my thoughts/opinions don't have any particular significance, and an insider might come along and tell us that I'm barking up the wrong tree]
--------
[1] This sort of thing can happen, but is rare. jquery became an unofficial standard for DOM manipulation and related matters for quite a long time, to give one example - but the gulf between the standard standard (and its bad common implementations) at the time and what libraries like jquery offered was much larger than the benefits a secondary C stdlib standard might give.
I tried to create my own alternative about a decade ago which eventually influenced my other endeavours.
But another big reason is that people use C and its stdlib because that's what it is. Even if it is bad, it's the "standard" and trivially available. Most code relies on it, even code that has its own standard library alternative.
Everybody has created their own standard library. Mine has been honed over a decade, why would I use somebody else's? And since it is designed for my use cases and taste, why would anyone use mine?
Because people are so terribly opinionated that the only common denominator is that the existing thing is bad. For every detail that somebody will argue a modern version should have, there will be somebody else arguing the exact opposite. Both will be highly opinionated and for each of them there is probably some scenario in which they are right.
So, the inability of the community to agree on what "good" even means, plus the extreme heterogeneity of the use cases for C, is probably the answer to your question.
(But I think Odin is great!)
I suppose glib comes the closest to this? At least the closest that actually sees fairly common usage.
I never used it myself though, as most of my C has been fairly small programs and I never wanted to bother people with the extra dependency.
I returned to the language after a stint of work in other tech and to my utter amazement, the parametric polymorphism that was added to the language felt “right” and did not ruin the comprehensibility of the core library.
Thank you gingerBill!
I'm a game-play programmer and not really into memory management or complex math. I like things to be quick and easy to implement. My games are small. I have no need for custom allocators or SOA. All I want is a few thousand sprites at ~120fps. I normally just work in the browser with JS. I use Odin like it's a scripting language.
I really like the dumb stuff like... no semicolons at the end of lines, no parentheses around conditionals, the case statement doesn't need breaks, no need to write var or let, the basic iterators are nice. Having a built in vector 2 is really nice. Compiling my tiny programs is about as fast as refreshing a browser page.
I also really like C style procedural programing rather than object oriented code, but when you work in a language that most people use as OO, or the standard library is OO, your program will end up with mixed paradigms.
It's only been a few weeks, but I like Odin. It's like a statically typed and compiled scripting language.
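For flavor, the kind of code that workflow produces (a tiny sketch, not taken from the parent's games):

    package main

    import "core:fmt"

    Sprite :: struct {
        pos: [2]f32,
        vel: [2]f32,
    }

    main :: proc() {
        sprites: [dynamic]Sprite
        defer delete(sprites)

        append(&sprites, Sprite{pos = {10, 20}, vel = {120, 0}})

        dt: f32 = 1.0 / 120.0
        for &s in sprites {
            s.pos += s.vel * dt // built-in vector arithmetic, no operator overloading needed
        }
        fmt.println(sprites[0].pos.x, sprites[0].pos.y) // .x/.y work on fixed arrays
    }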
Here's an example of how I use the nature and raylib bindings.
https://github.com/weiwenhao/tetris
(Personally I have spent my weekend evaluating C-like languages and I need a break and to reset my palate for a bit)
And I know malloc/free is the allocator, but you cannot override it either.
TBH it was also news to me, I discovered it randomly while browsing vulnerabilities… Printf also allocates, and a ton of other stdlib functions as well.
Like... how easy is it to not know how anything works and generate a "working" program, using the loosest possible definition of "working", using LLMS?
So it strikes me that a new language may be the wrong approach to addressing C's issues. Can they truly not be addressed with C itself?
E.g., here's a list of some commonly mentioned issues:
* standard library is godawful, and composed almost entirely of foot guns. New languages fix this by providing new standard libraries. But that can be done just as well with C.
* lack of help with safety. The solutions people put forward generally involve some combination of static analysis disallowing potentially unsafe operations, runtime checks, and provided implementations of mechanisms around potentially unsafe operations (like allocators, and slices). Is there any reason these cannot be done with C (in fact, I know they all have been done).
* lack of various modern conveniences. I think there's two aspects of this. One is aesthetics -- people can feel that C code is inelegant or ugly. Since that's purely a matter of personal taste, we have to set that aside. The other is that C can often be pretty verbose. Although the syntax is terse, its low-level nature means that, in practice, you can end up writing a relatively large number of lines of code to do fairly simple things. C alternatives tend to provide syntax conveniences that streamline common & preferred patterns. But it strikes me that an advanced enough autocomplete would provide the same convenience (albeit without the terseness). We happen to have entered the age of advanced autocomplete.
Building a new language, along with the ecosystem to support it, is a lot of fun. But it also seems like a very inefficient way to address C's issues because you have to recreate so much (including all the things about C that aren't broken), and you have to reach some critical mass of adoption/usage to become relevant and sustainable. And to be frank, it's also a pretty ineffective way to address C's issues because it doesn't actually do anything to help all the existing C code. Very few projects are in a position to be rewritten. Much better would be to have a fine-grained set of solutions that code bases could adopt incrementally according to need and opportunity.
Of course, I realize all this has been happening with C all along. I'm just pointing out that that seems like the right approach, while these C alternatives, fun and exciting as they are (as far as these things go), are probably just sound and fury that will ultimately fade away. (In fact, it might be worse if some catch on... C and all the C code bases will still be there, we'll just have more fragmentation.)
I made my own standard library to replace libc. The lack of safety is hard to do when you don't have a decent enough type system. C's lack of a proper array type is a good example of this.
Before making Odin, I tried making my own C compiler with some extensions, specifically adding proper arrays (slices) with bounds checking, and adding `defer`. This did help things a lot, but it wasn't enough. C still had fundamentally broken semantics in so many places that just "fixing" the problems of C in C was not enough.
I didn't want to make Odin initially, but it was the conclusion I had after trying to fix something that cannot be fixed.
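For comparison, the two extensions mentioned (proper slices with bounds checking, and defer), as they ended up looking in Odin; a minimal sketch:

    package main

    import "core:fmt"

    main :: proc() {
        data := make([]byte, 16) // a real slice: pointer and length travel together
        defer delete(data)       // runs at scope exit, on every exit path

        for i in 0..<len(data) {
            data[i] = byte(i)
        }
        fmt.println(data[3], len(data))

        // data[99] = 1 // would trip the bounds check and panic,
        //              // rather than silently corrupting memory
    }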
This is (almost *) bounds safety with -fsanitize=bounds
*) with some pending compiler improvements it will be perfect
(edit: updated godbolt link)
It's a short game made in Odin, but I spent a lot of effort in polishing it so that it would be a pleasant little strange experience.
It was the first commercial game made in Odin.
Some games made by other people:
Solar Storm (turn-based artillery game): https://store.steampowered.com/app/2754920/Solar_Storm/
2deez (fighting game, not yet released): https://store.steampowered.com/app/3583000/2Deez/