If you've ever done this to a C library, the first thing that you'll look at when someone else does it is not the FILE type, but how stdin, stdout, and stderr have changed.
The big breaking change is usually the historical implementation of the standard streams as addresses of elements of an array rather than as named pointers. (Plauger's example implementation had them as elements 0, 1, and 2 of a _Files[] array, for example.) It's possible to retain binary compatibility with unrecompiled code that uses the old getc/putc/feof/ferror/clearerr/&c. macros by preserving structure layouts, but changing stdin, stdout, and stderr can make things not link.
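To make the link breakage concrete, here's a rough sketch of the two styles, using Plauger's _Files naming for the old one and the __stdin/__stdout/__stderr symbols mentioned below for the new one (illustrative only, not OpenBSD's exact headers):

```c
/* Old style (Plauger-like): the standard streams are addresses of array
 * elements, baked into every compiled caller by these macros. */
extern FILE _Files[];
#define stdin  (&_Files[0])
#define stdout (&_Files[1])
#define stderr (&_Files[2])

/* New style: the streams are named pointers resolved at link time.
 * Freshly compiled code now references __stdin/__stdout/__stderr,
 * symbols an old shared library never exported. */
extern FILE *__stdin, *__stdout, *__stderr;
#define stdin  __stdin
#define stdout __stdout
#define stderr __stderr
```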
And indeed that has happened here.
openbsd has never, ever, ever, even once maintained binary compatibility; this is just a warning about source compatibility breaking
You have this backwards. Source compatibility is not broken by such a change, with zero source changes required to accommodate it for almost all applications; and far from OpenBSD "never, ever, ever" maintaining binary compatibility, Masahiko Yasuoka and Philip Guenther took deliberate steps in this very case to ensure as much binary compatibility as they could, retaining structure layouts as-is and retaining several symbols for library internals that macros used to reference, even those that won't be used by freshly recompiled applications.
The warning, and the bumping of several shared library major version numbers, is most definitely about the standard streams breaking binary, not source as you have it, compatibility. Any newly compiled binary that is using the C standard streams won't run on old shared libraries because of the new symbol references for __stdin, __stdout, and __stderr.
I think FreeBSD tried to make FILE opaque[1], but it was reverted[2] and it is still non-opaque in main[3].
[1]: https://github.com/freebsd/freebsd-src/commit/c17bf9a9a5a3b5...
[2]: https://github.com/freebsd/freebsd-src/commit/19e03ca8038019...
[3]: https://github.com/freebsd/freebsd-src/blob/main/include/std...
OpenBSD tends to commit to breaking changes much more aggressively than others. Something tells me they're not reverting.
I think FreeBSD is also more concerned with performance regression than OpenBSD is.
FreeBSD's implementation of FILE is a nice object-oriented structure which anyone could derive from. Super-easy to make FILE point to a memory buffer or some other user code. I used that a bunch a long time ago.
Obviously making FILE opaque completely breaks every program that used this feature, so no surprise it was reverted.
In the FreeBSD case, far from breaking "every program" it breaks very little at all. In fact it broke 1 thing at the time. Unfortunately, that 1 thing happened to be sysinstall(8).
stdin, stdout, and stderr were already pointers rather than array element addresses, and the external symbol references to __stdinp, __stdoutp, and __stderrp did not change. Beyond that:

* compiled code using the old macros continued to work, as the actual structure layout was not changed;
* compiled code using FILE* would have continued to work, as the pointer implementation didn't change;
* compiled C++ code with C++ function parameter overloading would have continued to link, as the underlying struct type did not change;
* source code using the ferror_unlocked() and suchlike function-like macros would not have needed changing, as there were already ferror_unlocked() and suchlike functions, and those remained.
Looking at things like https://reviews.freebsd.org/D4488 from 2015 there was definitely stuff in the ports tree that would have broken back in 2008. But that won't break now should this change be made again, and that's not base.
What actually broke was libftpio, a library that was in base up until 2011, and it definitely won't break now, nearly 14 years after being removed as orphaned once sysinstall(8) itself had gone away.
* https://cgit.freebsd.org/src/commit/lib/libftpio?id=430f2c87...
fopencookie, fmemopen, you don't need transparency.
fopencookie seems glibc-specific, so unavailable on BSD.
BSD has had funopen(3) since 4.4, so it has an alternative. FreeBSD has implemented fopencookie(3) since v11, but FreeBSD is the BSD most willing to implement Linux interfaces for various reasons.
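For the common "FILE over a memory buffer" case, plain POSIX fmemopen(3) (available on glibc and the modern BSDs) is enough and needs no knowledge of FILE internals; a minimal sketch:

```c
#include <stdio.h>

int main(void)
{
    char buf[64];
    FILE *f = fmemopen(buf, sizeof buf, "w");   /* POSIX.1-2008 */
    if (f == NULL)
        return 1;

    fprintf(f, "hello from a memory stream: %d\n", 42);
    fclose(f);          /* flushes; POSIX NUL-terminates the buffer if space allows */

    fputs(buf, stdout); /* the formatted text is now in buf */
    return 0;
}
```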
The best comments always say "why" and that's missing here.
Does anyone know why this change was done? Security reasons? Preparing for future changes?
Can someone elaborate? I always treated FILE as opaque, but never imagined people could poke into it?
The MH and nmh mail clients used to directly look into FILE internals. If you look for LINUX_STDIO in this old version of the relevant file you can see the kind of ugliness that resulted:
https://cgit.git.savannah.gnu.org/cgit/nmh.git/tree/sbr/m_ge...
It's basically searching an email file to find the contents of either a given header or the mail body. These days there is no need to go under the hood of libc for this (and this code got ripped out over a decade ago), but back when the mail client was running on elderly VAXen this ate up significant time. Sneaking in and reading directly from the internal stdio buffer lets you avoid copying all the data the way an fread would. The same function also used to have a bit of inline vax assembly for string searching...
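For a feel of what that looked like, here is a rough sketch of the same trick written against glibc's publicly visible _IO_read_ptr/_IO_read_end fields (nmh poked at whatever the local stdio exposed); it is exactly the kind of code an opaque FILE kills:

```c
#include <stdio.h>

/* Peek at whatever is already sitting in the stream's buffer without
 * copying it out via fread().  Returns a pointer into the live stdio
 * buffer and the number of bytes available there.  A real consumer
 * (like the old nmh code) would also advance _IO_read_ptr afterwards. */
static size_t peek_stdio_buffer(FILE *fp, const char **data)
{
    if (getc(fp) == EOF)               /* forces a buffer refill if empty */
        return 0;
    *data = fp->_IO_read_ptr - 1;      /* include the byte getc() consumed */
    return (size_t)(fp->_IO_read_end - *data);
}
```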
The only reason this "works" is that traditionally the FILE struct is declared in a public header so libc can have some of its own functions implemented as macros for speed, and that there was not (when this hack was originally put in in the 1980s) yet much divergence in libc implementations.
In gnulib, there is code that patches FILE internals for various platforms to modify behavior of <stdio.h> functions, or implement new functionality.
https://cgit.git.savannah.gnu.org/cgit/gnulib.git/tree/lib/s...
Yes, it's not a good idea to do this. There are more questionable pieces in gnulib, like closing stdin/stdout/stderr (because fflush and fsync are deemed too slow, and regular close reports some errors on NFS on some systems that would otherwise go unreported).
Yes, that part of Gnulib has caused some problems previously. It is mostly used to implement <stdio_ext.h> functions on non-glibc systems. However, it is also needed for some buggy implementations of ftello, fseeko, and fflush.
P.S. Hi Florian :)
And now updated for this change.
https://git.savannah.gnu.org/cgit/gnulib.git/commit/?id=69a0...
> Yes, it's not a good idea to do this. There are more questionable pieces in gnulib, like closing stdin/stdout/stderr (because fflush and fsync are deemed too slow, and regular close reports some errors on NFS on some systems that would otherwise go unreported).
Hyrum's law strikes again. People cast dl_info and poke at internal bits all the time too.
glibc and others should be using kernel-style compiler-driven struct layout randomization to fight it.
> Hyrum's law strikes again.
Is there a name for APIs that are drawn directly from some subset of observed behaviors?
Like Crockford going, "Hey, there's a nice little data format buried in these JS objects. Schloink"
> Is there a name for APIs that are drawn directly from some subset of observed behaviors?
Desire paths. https://en.wikipedia.org/wiki/Desire_path
The standard doesn't specify any serviceable parts, and I don't think there are any internals of the struct defined in musl libc on Linux (glibc may be a different story). However, on OpenBSD, it did seem to have some user-visible bits:
https://github.com/openbsd/src/commit/b7f6c2eb760a2da367dd51...
If you expose it, someone will probably sooner or later use it, but probably not in any sane / portable code. On the face of it, it doesn't seem like a consequential change, but maybe they're mopping up after some vulnerability in that one weird package that did touch this.
Historically some FILE designs exposed the structure somewhere so that some of the f* methods could be implemented as macros or inline functions (e.g., `fileno()`).
*BSD stdio.h used to include macro versions of some stdio functions (feof, ferror, clearerr, fileno, getc, putc) so they would be inlined.
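Roughly what those looked like; the field names below follow the old BSD layout and are illustrative, not portable:

```c
/* Classic BSD-flavored FILE with its guts in the public header, so the
 * hot-path operations can be macros.  Field names follow the old BSD
 * layout; __SEOF is the BSD "hit EOF" flag. */
typedef struct __sFILE {
    unsigned char *_p;      /* current position in the buffer */
    int            _r;      /* read space left in the buffer */
    short          _flags;  /* stream status flags */
    short          _file;   /* underlying file descriptor */
    /* ... buffer pointers, ungetc state, etc. ... */
} FILE;

#define __SEOF 0x0020       /* found EOF */

int __srget(FILE *);        /* slow path: refill the buffer */

#define fileno(fp)  ((int)(fp)->_file)
#define feof(fp)    (((fp)->_flags & __SEOF) != 0)
#define getc(fp)    (--(fp)->_r < 0 ? __srget(fp) : (int)*(fp)->_p++)
```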
I've seen old code do this over the years. Consider, for example, that snprintf() wasn't standardized until the late 1990s: people would mock up a fake FILE* and use fprintf.
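Roughly the shape of that hack, sketched against the old 4.4BSD-style stdio.h where _flags, _bf, _p, _w and the __SWR/__SSTR flags were all visible to applications; it won't build against an opaque FILE, which is rather the point:

```c
#include <stdio.h>
#include <stdarg.h>

/* Pre-C99 stand-in for snprintf(): point a stack-allocated "string
 * stream" at the caller's buffer and let vfprintf() do the formatting. */
int my_snprintf(char *str, size_t n, const char *fmt, ...)
{
    FILE f;                                  /* fake stream on the stack */
    va_list ap;
    int ret;

    f._flags = __SWR | __SSTR;               /* write-only, string-backed */
    f._bf._base = f._p = (unsigned char *)str;
    f._bf._size = f._w = n - 1;

    va_start(ap, fmt);
    ret = vfprintf(&f, fmt, ap);
    va_end(ap);

    *f._p = '\0';                            /* NUL-terminate the result */
    return ret;
}
```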
Hyrum's Law applies: the API of any software component is the entire exposed surface, not just what you've documented. Hence, if you have FILE well-defined somewhere in a programmer-accessible header, somebody somewhere can and will poke at the internal bits in order to achieve some hack or optimization.
OTOH, yes.
OTOH, when coding, I consider FILE to be effectively opaque in the sense that it probably is not portable, and that the implementers might change it at any time.
I am reminded of this fine article by Raymond Chen, which covers a similar situation on Windows way back when: https://devblogs.microsoft.com/oldnewthing/20031015-00/?p=42...
Yes, it would not be sane to depend on implementation details of something like this.
But the sad reality is that many developers (myself included earlier in my career) will do insane things to fix a critical bug or performance problem when faced with a tight deadline.
The OpenBSD answer to this is: fuck them they should've known better. The few pieces of software that do this and have an active port maintainer will get patched. The rest will stay broken until somebody cares to deal with the change.
And for everything that people say about backwards compatibility, and for all the times that something has broken on me... man, I'm still glad that there are people who will stand up and defend that attitude. It isn't breaking things if you point out where they were already broken. And moving fast is defensible when the only alternative is standing still.
One time, when we were discussing a potential internal change that threatened to break ill-behaved software, Steve Summit told a story like this:
"Once upon a time, pointers on the Macintosh had 24 bits. The upper 8 bits were reserved for flags. Apple warned developers not to look directly at the flags in the upper 8 bits, but to use the macros that were supplied as part of the API -- but third-party developers looked directly at the upper 8 bits anyway. When System 7 came out with full 32-bit pointers, a lot of old applications broke because of this!"
Of course, what he didn't mention at the time was that System 7 provided a toggle that allowed these programs to run with old-school 24-bit pointers -- the equivalent concession is something I don't think OpenBSD is willing to make.
Nevertheless, vendors can and have broken full backward compatibility in cases where the developers "should've known better". Hyrum's Law just states that there will be a few that don't get the message and will watch their software break when these changes are made...
> to achieve some hack or optimization.
Or functionality. It happens to me all the time: I have some Java class that's marked final, so instead of just extending the class and moving on, I have to copy/paste the entire class wholesale to accomplish my goal.
Personally I hate "nanny" languages that block you from accessing things. It's my computer, and my code, and my compiler. Please don't do things "for my own good", I can decide that for myself.
(And yes, I am aware of the argument that this lets the original programmer change the internals; in practice it's not such a big problem, or the cure is worse than the problem - as with my copy/paste example.)
Another example is a private constant. Instead of allowing me to reference it, I have to copy it. How is that any better? If the programmer has to change how the constant works then they can do so, and at that point my code will break and I'll .... copy the constant. But until then I can just use the constant.
Typical "early-in-carrier" thinking. Copying implementation is totally correct move here.
All projects mentioned should have forked stdio and added their hacks/optimisations/functionality to that.
They were just too lazy. Can't blame them though. Writing C code is torture after all. One should cut all the corners they could.
People use reflection for monkey patching and complain when using compiled languages less supportive of such approaches.
So it wouldn't surprise me, that a few folks would do some tricks with FILE internals.
I always assumed that people could poke into it, but shuddered at the thought.
In addition to "some code frobs internals", non-opaque FILE also allows for compatibility with code which puts FILE into a structure, since an opaque FILE doesn't have a size.
But code outside the standard library can’t do that, can it? fopen returns a pointer to a FILE, and you can’t know how a struct FILE should be copied.
You can’t just memcpy the bits and then mix calls to fread using pointers to the old and the new FILE struct, for example. I think the standard library need not even support calls using a pointer to a FILE struct it didn’t create.
It happened to Microsoft a while ago: they changed something in FILE, something else broke, and so they went for an opaque FILE.
However, who should really rely on internals of FILE? Isn't this a bad practice?
In SunOS 4.x `FILE` was not opaque, and `int fileno(FILE *)` was a macro, not a function, and the field of the struct that held the fd number was a `char`. Yeah, that sucked for ages, especially since it bled into the Solaris 2.x 32-bit ABI.
Indeed, that was the way it originally worked in all UNIXes: https://github.com/dspinellis/unix-history-repo/blob/Researc...
It was a then-important optimization to do the most common operations with macros since calling a function for every getc()/putc() would have slowed I/O down too much.
That's why there is also fgetc()/fputc() -- they're the same as getc()/putc() but they're always defined as functions so calling them generated less code at the callsite at the expense of always requiring a function call. A classic speed-vs-space tradeoff.
But, yeah, it was a mistake that it originally used a "char" to store the file descriptor. Back then it was typical to limit processes to 20 open files ( https://github.com/dspinellis/unix-history-repo/blob/Researc... ) so a "char" I'm sure felt like plenty.
In general, it is a bad practice. However, it can be useful for some low-level libraries. For example, https://github.com/fmtlib/fmt provides a type-safe replacement for `printf` that can write directly to the FILE buffer, with performance comparable to or better than native stdio.
Doesn't fwrite more or less write directly to the FILE buffer, if buffering is enabled?
I'm curious to take a closer look at fmtlib/fmt, which APIs treat FILE as non-opaque?
Edit: ah, found some of the magic, I think: https://github.com/fmtlib/fmt/blob/35dcc58263d6b55419a5932bd...
I'm curious how much speedup is gained from this.
With fwrite that would be another level of buffering in addition to FILE's buffer. If you are interested in what {fmt} is doing, a good starting point is https://github.com/fmtlib/fmt/blob/35dcc58263d6b55419a5932bd.... It is also possible to bypass stdio completely and get even faster output (https://vitaut.net/posts/2020/optimal-file-buffer-size/) and while it is great for files, it may introduce interleaving problems with things like stdout.
Out of curiosity: if all we have inside `bits/types/FILE.h` on Linux/GNU is `typedef struct _IO_FILE FILE;`, does that mean the type is opaque?
Unless struct _IO_FILE is expanded elsewhere, yes, it's opaque and can only be interacted with through a pointer, as it would otherwise be an unsized object.
The definition of struct _IO_FILE seems to be inside `bits/types/struct_FILE.h`, and goes like this:
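```c
/* Abridged from glibc's bits/types/struct_FILE.h; later fields elided. */
struct _IO_FILE
{
  int _flags;                /* flags word */

  /* The following pointers correspond to the C++ streambuf protocol. */
  char *_IO_read_ptr;        /* current read pointer */
  char *_IO_read_end;        /* end of get area */
  char *_IO_read_base;       /* start of putback+get area */
  char *_IO_write_base;      /* start of put area */
  char *_IO_write_ptr;       /* current put pointer */
  char *_IO_write_end;       /* end of put area */
  char *_IO_buf_base;        /* start of reserve area */
  char *_IO_buf_end;         /* end of reserve area */
  /* ... markers, chain, _fileno, offsets, locks, padding ... */
};
```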
so I suppose this means it's indeed expanded somewhere and, thus, not opaque?

When we're talking about opaque it's really in relation to an individual translation unit - somewhere in the binary or its linked libraries the definition has to exist for the code that uses the opaque type.
Forgive my ignorance in this topic, but if "stdio.h" itself includes "bits/types/struct_FILE.h", is anything preventing me from accessing the individual elements of FILE as they are defined in the latter header file?
It looks like FILE is not opaque in glibc. Create a translation unit that includes <stdio.h> & declares a FILE variable and it compiles fine. For comparison, create a translation unit that declares your own struct (but does not provide a definition) and declares a variable of the same type, and you'll get a "storage size of 'x' isn't known" error when compiling.
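A quick way to see the difference, assuming glibc; the commented-out line shows what a genuinely opaque (incomplete) type does:

```c
#include <stdio.h>

struct opaque;              /* declared, never defined: an incomplete type */

FILE f;                     /* compiles with glibc: FILE is a complete struct */
/* struct opaque o; */      /* error: storage size of 'o' isn't known */

int main(void)
{
    printf("sizeof(FILE) = %zu\n", sizeof(FILE));
    return 0;
}
```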
Thanks for the explanation. If FILE were opaque in glibc, would the same test (including <stdio.h> and declaring a variable of type FILE) also fail with the unknown storage size error? If so, would linking against some library (-l) be necessary?
So many words in the commit message and the announcement article, yet not a single mention of the rationale? I have a bad feeling about their practice.
I don't know if I agree, but this is one shining example of what makes the *BSDs great: not being afraid of change. Linux should take note. So many of Windows' headaches stem from not wanting to break things and needing to support old client code.
"Windows" did this 11 years ago:
>FILE Encapsulation: In previous versions, the FILE type was completely defined in <stdio.h>, so it was possible for user code to reach into a FILE and muck with its internals. We have refactored the stdio library to improve encapsulation of the library implementation details. As part of this, FILE as defined in <stdio.h> is now an opaque type and its members are inaccessible from outside of the CRT itself.
https://devblogs.microsoft.com/cppblog/c-runtime-crt-feature...
> Linux should take note
Ugh, no, it should not. As a user I prefer my existing programs to keep working whenever I update my OS, and as a developer I prefer to work on new code rather than play nanny with existing, previously working code (working here meaning the code did the task it was supposed to do) because some dependency broke itself.
> So much of Windows' headaches stem from not wanting to break things
Quite an acceptable price for not having the headache of things breaking.
There isn't really much of "Linux" here - this code is in libc, so glibc, but that was built for portability; it isn't very Linux-specific. Linux doesn't have an all-encompassing community for userspace.
I see. I thought OpenBSD maintained their own downstream fork of glibc or something since the title/link are for their site/lists.
It may not be all-encompassing, but I was referring to GNU/Linux. You can swap out bits and pieces, but what mainstream distros include by default is what I meant.
What you are looking at is not a GNU C library at all. It is a BSD C library.
I think you are looking for the Linux Standard Base. It started out with a great idea, but the LSB grew so large most popular distros publicly stated they would no longer pursue compliance, so the effort kinda fizzled out.
Windows has kept FILE opaque for as long as I can remember. Granted, that's not very long, only 10 or so years.
To misquote the Street Fighter movie, OpenBSD to Linux:
"For you the day you changed your ABI was the most important day in your life, but for me? It was Tuesday"
I enjoy the dichotomy between how bad the Linux project is at changing their ABI and how good OpenBSD is at the same task.
Whereas for the most part Linux just decides to live with the bad ABI forever, and if they do decide it actually needs to be changed, it is a multi-year drama with much crying and missteps.
I mean, sure, Linux has additional considerations that make breaking the ABI very scary for them. The big one is the corpus of closed-source software, but being an orders-of-magnitude bigger project with overall looser integration does not help any.
> bad the Linux project is at changing their ABI and how good OpenBSD is at the same task
From my perspective as a user who wants his programs to keep working whenever the OS updates, and as a programmer who does not want to waste time playing nanny with broken dependency upgrades for previously working code (working in the sense that it did what it was supposed to do), the Linux project is actually doing things the right way and OpenBSD the bad way. It is basically the #1 reason I never considered using OpenBSD.
Linux's stance on not breaking backwards compatibility is exactly what I want from an OS. Now if only the userspace libraries weren't so happy to break things too...
This has nothing to do with Linux-the-project. An equivalent change would be in glibc / musl / ...
I think the difference is just the amount of people using the technology.
CHERI would defend against access to internal data structures without having to bounce between address spaces, FWIW.
Please elaborate.
fopen would hand out a FILE* without capabilities to do anything with the resulting data structure, but libc itself could work with it. Libraries would get the same kind of memory protections processes do today.
How would libc get a FILE* pointer with capabilities back from a FILE* passed by the user?
libc allocates the FILE* from an array of them or a heap of some sort. It has a private capability on the start of the array and so can recover a full-capability pointer by offsetting its private capability by the distance encoded in the user FILE*. No actual memory access required, I'd think.
See https://www.cl.cam.ac.uk/research/security/ctsrd/pdfs/202306...
This sounds about right. Under CHERI when you're returning a pointer from a function, you can choose to limit its valid dereferencable range, I imagine all the way to 0 (i.e. it can't be dereferenced).
When the pointer is passed back into libc, libc can combine the pointer with an internal capability that has the actual size/range of the structure.
This isn't _too_ different to having libc just hand out arbitrary integers as FILE; libc has to have some way to map the 'FILE' back to the real structure.
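A rough sketch of that rederivation step, assuming the helper names from CHERI C's <cheriintrin.h> (cheri_bounds_set, cheri_address_get); exact spellings vary by toolchain, and the stream table and handle names here are made up for illustration:

```c
#include <stddef.h>
#include <cheriintrin.h>

struct stream { int fd; /* buffers, flags, ... */ };
typedef struct stream handle_t;     /* what the caller would see as "FILE" */

/* libc's private stream table; libc keeps a full-bounds capability to it. */
static struct stream table[64];

/* Hand out a pointer whose dereferenceable length is zero: it still
 * identifies the stream, but user code cannot load or store through it. */
handle_t *make_handle(size_t idx)
{
    return cheri_bounds_set(&table[idx], 0);
}

/* Rederive a usable pointer from the caller's handle by combining its
 * address with libc's own full-bounds capability over the table. */
static struct stream *recover(handle_t *h)
{
    size_t off = cheri_address_get(h) - cheri_address_get(table);
    return (struct stream *)((char *)table + off);
}
```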