TheSecrecyVersusSecurityInteraction 4 - 07 Sep 2011 - Main.IanSullivan
TheSecrecyVersusSecurityInteraction 3 - 03 Oct 2008 - Main.EbenMoglen
I think the most important advance in OS security in the last ten years was the development for NSA of Security-Enhanced (SE) Linux. Mitre produced for NSA, under GPL, for public distribution, a comprehensive set of modifications deeply integrating into the kernel a set of mandatory access control, role-based access control and type enforcement facilities. The SE Linux security enhancements are now part of the stock kernel for Red Hat Enterprise Linux, Fedora, Ubuntu, and several other Linux distributions. Openness made it possible for NSA to move experimental security technology results from Fluke straight into the production environment of servers around the world, in a fully accountable and verifiable fashion. Nothing the monopoly has done in partial rectification of all the insecurity, spying and criminality it has made possible even comes close.
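For a concrete sense of what this machinery looks like from user space: on an SELinux-enabled host, every file carries a security context that the kernel's mandatory access control consults, and the label is readable without any special tooling. The sketch below is an illustration only; it assumes Python 3.3+ on a Linux system with SELinux enabled, and the file paths are arbitrary examples.

<verbatim>
# Minimal sketch: read a file's SELinux security context through the
# "security.selinux" extended attribute. Illustration only.
import os

def selinux_context(path):
    """Return a file's SELinux context, e.g.
    'system_u:object_r:etc_t:s0', or None if the file has no label or
    SELinux is not enabled."""
    try:
        raw = os.getxattr(path, "security.selinux")
    except OSError:
        return None
    # The kernel hands back a NUL-terminated byte string.
    return raw.rstrip(b"\x00").decode()

if __name__ == "__main__":
    for p in ("/etc/passwd", "/etc/shadow"):
        print(p, "->", selinux_context(p))
</verbatim>

Whether any particular access is then allowed is decided by the loaded policy's type-enforcement and role rules, not by the traditional owner/group/other permission bits.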
-- EbenMoglen - 03 Oct 2008
TheSecrecyVersusSecurityInteraction 2 - 03 Oct 2008 - Main.AndreiVoinigescu
In any sort of system, open, closed, or hybrid, someone must expend the effort to uncover security flaws, whether by looking at the code if it's available or by reverse engineering and experimenting if it is not. Hackers who stand to profit from the vulnerabilities they uncover are never going to reveal them, regardless of whether the OS is open or closed source.
Really smart black hat hackers might be able to spot vulnerabilities in source code that a thousand other smart programmers would miss, but so would really smart white hat hackers. Assuming the really smart hackers don't disproportionately gravitate to the dark side of the force, the effects of increased transparency should balance out.
A company could try to keep the code secret and hire all the really smart white hat hackers to vet it, avoiding the pitfall of exposing the source to ne'er-do-wells, but in a competitive environment no one company will ever be able to hire every security expert who might (out of professional curiosity or for other reasons) be willing to look at the code if the source were openly available. No company would even be able to identify all of them. There's a real gain here from self-selection.
The other great advantage to open source is that once a security vulnerability is discovered, you don't have to wait for someone else to fix it for you. You don't have to reveal anything about your use of the system to a potentially untrustworthy third party, or rely on a fix you can't verify at the level of the code.
-- AndreiVoinigescu - 03 Oct 2008
TheSecrecyVersusSecurityInteraction 1 - 02 Oct 2008 - Main.JohnPowerHely
-- JohnPowerHely - 02 Oct 2008
This is not meant as a full-blown essay on my part, but rather as a place for discussion on a topic of some concern to me. I know we have a lot of coders and tech-heads in the course, and I'd love your input on the matter. I'd encourage anyone to either leave a comment or just edit the text directly.
Secrecy versus Security in Operating Systems
I've spent part of my life coding, troubleshooting, and working in information and personnel security in the Department of Defense, so it would be safe to say that security of information is a key issue for me whenever a discussion turns to free distribution of code, at least in terms of operating systems or security protocols. I find myself often stuck trying to decide if a completely closed or open system is best, or if the optimal solution lies somewhere in between.
I'm not certain how many of you are familiar with the DoD's TCSEC (often referred to as the Orange Book), but it (along with NSA and Naval security instructions) served as my bible for a good number of years. The saddest part is that there are few systems that would properly be classified even as C2, but that's a separate issue and one that has come darn close to causing me ulcers in the past. While this may seem an aside, I use it to note that real-world issues depend on the ability to secure certain information. I'm not just referring to the privacy of personal information, which we will apparently get to soon, but also to things as prosaic as business plans and as baroque as troop movements, contingency, invasion, and disaster plans, etc.
Many think that strong cryptography is all that is needed to protect such data, but crypto alone (and I don't care if you are talking Triple-DES, AES, the RSA SecurID quasi one-time pad, PGP, etc. - at any key strength) does nothing if people can get around it through holes in the OS. If your lock is stronger than the door, a good thief just takes the whole door off the hinges. And why bother spoofing over-the-wire communications when you can slip in and read the data right off the hard drive? So here lies the dilemma - how do you best secure an OS?
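To make the "door off the hinges" point concrete, here is a rough sketch, not a real audit tool: however strong the cipher protecting data in transit, a copy that ends up decrypted into a world-readable file on disk is exposed through the operating system rather than through the cryptography. The paths below are hypothetical examples.

<verbatim>
# Illustration only: flag files any local user can read, which matters
# far more than key length once plaintext is sitting on the disk.
import os
import stat

def world_readable(path):
    """True if any local user can read the file."""
    return bool(os.stat(path).st_mode & stat.S_IROTH)

if __name__ == "__main__":
    for p in ("/tmp/decrypted-report.txt", "/home/alice/.ssh/id_rsa"):
        try:
            status = "WORLD-READABLE" if world_readable(p) else "restricted"
        except FileNotFoundError:
            status = "missing"
        print(p, status)
</verbatim>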
Closed System OS (Think Microsoft)
In a closed system OS, the only people who know the full ins and outs of the OS's operation are the programmers who wrote it, most likely up in Redmond or Cupertino. The advantages of this model should be clear: people who want to find holes in the system have to do it the hard way - by throwing everything they can at it until they find a soft spot. But the disadvantages should be just as clear - you typically only find a security problem after an attack has succeeded. What's more, the fact that it is a closed system can often lead to concerns when an OS is designed for typical users and not just for those of us who are (or need to be) overly paranoid. I can't tell you how many sleepless nights and cold-sweat wakeups were caused when people first wanted to use XP on DoD systems - and the primary concern wasn't what you'd think. We actually had to break the PnP features to prevent auto-recognition and accessibility of USB keys.
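For what it's worth, one widely used lockdown of that kind on Windows XP-era machines (not necessarily the method used on DoD systems, which isn't specified above) was simply to disable the USB mass-storage driver by setting its service start type to "disabled" in the registry. A rough sketch, requiring administrator rights and Python 3 on Windows:

<verbatim>
# Sketch of one common Windows lockdown: disable the USB mass-storage
# driver by setting its service start type to 4 (SERVICE_DISABLED).
# Illustrative only; takes effect for newly attached devices.
import winreg

USBSTOR_KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
SERVICE_DISABLED = 4

def disable_usb_mass_storage():
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR_KEY,
                         0, winreg.KEY_SET_VALUE)
    try:
        winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD,
                          SERVICE_DISABLED)
    finally:
        winreg.CloseKey(key)

if __name__ == "__main__":
    disable_usb_mass_storage()
    print("USB mass-storage driver disabled.")
</verbatim>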
Fully Open Systems (Think any of the brands of Unix that Eben's been discussing - particularly Linux)
The advantages and disadvantages of an open system are, in my mind, the exact reverse of those in a closed system. You get plenty of eyeballs on the code, and people tend to find a number of flaws early, often before any attack. But allowing every Tom, Dick, and Harry access to the manner in which the kernel operates is not always a good thing. Consider that the most dangerous security flaws are often the most subtle. A thousand good coders might look right at them and not notice, because they are often caused by the interaction of multiple API calls. What happens when a group of coders stumbles across one of these needles in the haystack (and in anything as massive as a fully-fledged OS there will be many, many such needles) and chooses not to share it with the rest of the gang? Does the fact that someone else will likely catch the flaw eventually overcome the concern that, by freely distributing the code, we've made it easier for those Black Hats to find a flaw and exploit it? Especially since some flaws of this nature are so difficult to find without the code that malicious individuals might never have stumbled upon them otherwise? I need another antacid...
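To make the "subtle interaction" worry concrete, the textbook example of a flaw that lives between two individually reasonable calls rather than in any single line is a time-of-check-to-time-of-use (TOCTOU) race. The function name and path handling below are invented purely for illustration.

<verbatim>
# Sketch of a classic TOCTOU race: each call looks fine in isolation,
# but the bug lives in the gap between them. Illustration only.
import os

def serve_file(path):
    """Between the access() check and the open(), the name can be
    re-pointed (say, via a symlink swap in an attacker-controlled
    directory) at a file the privileged process can read but the
    requester cannot. The usual fix is to drop privileges and just
    open(), so that check and use happen in one step in the kernel."""
    if os.access(path, os.R_OK):      # time of check
        with open(path, "rb") as f:   # time of use: the race window
            return f.read()
    raise PermissionError(path)
</verbatim>

A thousand readers can inspect either call on its own and see nothing wrong; the flaw sits in their interaction, which is exactly the kind of needle described above.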
Partially-Open System (the closest thing I can think of would be JWICS)
Just as some OSs have more than just system-level and user-level calls, creating more layers in the onion where certain processes are slightly more trusted than others, can we do the same with OS code distribution? If we can, should we? Would it make things any better, or, as Eben commented regarding the Zune, is it impossible to be both open and closed? Could we fully release all code from the API level upward, and have a limited distribution to security experts and trusted programmers worldwide for the deeper code? Would this just provide a false sense of higher security? Here is where I most look for input from others.
Closing
I should note that the concept of hiding code from anyone is one that I am personally less than comfortable with in most circumstances. I'm a decent code-monkey but a bad debugger, and I know I'm not alone. The ability to see how others have written the functions and procedures (or whatever the current proper lingo is in the OOP world) you are using is something I have an instinctive need for, both to assure myself that errors aren't caused by the parts of the code I didn't write and to find better ways to craft the code I do. Perhaps I've been sucked into the 'fear, uncertainty, and doubt' propaganda machine that Eben discusses, but while I would never use a cryptographic algorithm I couldn't look at - even when its operation exceeds my feeble ability in group theory - I still find something comforting about a kernel that no one can look at (or that only the designers and I could look at). So I turn to you all. Have I simply drunk the Kool-Aid, or is there some credence to my concerns?
This site is powered by the TWiki collaboration platform. All material on this collaboration platform is the property of the contributing authors. All material marked as authored by Eben Moglen is available under the license terms CC-BY-SA version 4.