These projects are so vast that no human mind can comprehend them; at best, a person can grasp one small, isolated subset of the whole.
Which means - no human mind can trust them either, and no human programmer alone can conduct a security review.
Which means they should not be trusted, and should be considered insecure - unless they can be carefully isolated from the environment so that only a trusted surface is exposed.
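To make "only a trusted surface is exposed" concrete, here is a minimal sketch of my own (not taken from any of the projects linked below): the untrusted part runs as a separate process with an empty environment, and the host trusts nothing except a tiny JSON-over-stdin/stdout contract that it validates. This only narrows the interface; it is not a real sandbox, and genuine isolation would add OS-level confinement on top (seccomp, jails, containers). The command, protocol, and helper names here are hypothetical.

import json
import subprocess
import sys

# Stand-in for the real untrusted program (hypothetical): it reads one JSON
# request from stdin and writes one JSON reply to stdout.
UNTRUSTED_CMD = [
    sys.executable, "-c",
    "import sys, json; "
    "req = json.loads(sys.stdin.readline()); "
    "print(json.dumps({'sum': sum(req['values'])}))",
]

def call_untrusted(values):
    proc = subprocess.run(
        UNTRUSTED_CMD,
        input=json.dumps({"values": values}) + "\n",
        capture_output=True,
        text=True,
        env={},       # the child inherits no environment variables
        timeout=5,    # untrusted code must not be able to hang the host
        check=True,
    )
    reply = json.loads(proc.stdout)
    # The only thing the host trusts is this narrow, validated contract.
    if set(reply) != {"sum"} or not isinstance(reply["sum"], int):
        raise ValueError("untrusted component violated the protocol")
    return reply["sum"]

if __name__ == "__main__":
    print(call_untrusted([1, 2, 3]))  # prints 6

The point of the sketch is that the reviewable part, the contract and its validation, is small enough for one person to read and trust, regardless of how large the untrusted component is.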
My ideal project size: something that an average coder can read in a week or two, and come back to their (possibly anarchist) colleagues saying: “this code looks reliable and won’t be leaking buckets”.
I should point out that hardened, formally verified OSes exist, verified from top to bottom. Some projects even go so far as to harden and formally verify both the software and the hardware:
https://www.cs.ox.ac.uk/tom.melham/pub/Gao-2021-EFV.pdf
https://github.com/standardsemiconductor/lion
http://web1.cs.columbia.edu/~junfeng/09fa-e6998/papers/sel4.pdf
https://www.cs.cornell.edu/courses/cs6410/2013fa/slides/10-verifiable-systems.pdf
https://aerospacelab.onera.fr/sites/w3.onera.fr.aerospacelab/files/AL04-10_0.pdf
https://www.cs.utexas.edu/users/hunt/FMCAD/FMCAD12/005.pdf