Friday, May 19, 2017

ICMC17: Thomas Jefferson and Apple versus the FBI

Daniel J. Bernstein, University of Illinois at Chicago & Technische Universiteit Eindhoven

Gutenberg's original printing press was based on a wine press - who knew? If you think beer or wine is dangerous, you may think the best thing to do is prohibit alcohol. In 1919, the Women's Christian Temperance Union requested that public libraries remove books and pamphlets on the home production of alcoholic drinks. The librarians would not destroy the books, but did remove them from public access.

Why do censors try to ban instructions? "It might be bad if people follow these instructions" - the goal is to stop people from acting on information. We have freedom of speech, though, so we shouldn't accept this. You can try to hide the information, but people will still find it or figure it out on their own. Censorship adds very little benefit, and often causes massive damage.

There are careful exceptions to free speech in the US - you cannot intentionally solicit criminal activity. You also cannot advocate imminent lawless action if it's likely to produce such action: "Let's burn down that mosque" is not protected by free speech. You also can't make false promises (breach of contract), deceive people for profit (fraud), or make false statements that damage reputation with reckless disregard for the truth (defamation).

What about training videos? Is Ocean's Eleven a training video? What about Tom Clancy's Debt of Honor (1994), which described something very similar to the 9/11 attacks? Some people also don't want to see historical documents and books on things like kamikaze pilots - what if terrorists act on these examples? It turns out they will come up with it themselves, even without such inspiration.
So, the court has to look at it from this point of view: are you intentionally aiding and abetting criminal activity?

What if a terrorist stays hidden and alive in the woods by reading a book on "how to fish"? It's clearly not intended to help criminals.  That type of book is protected under free speech.

On software - it's usually (always?) something a human could do or calculate by hand, given time; we're just using the computer to make it faster. If you hear statements from the government about restricting computers, remove the word "computer" and ask whether the same rationale would justify censoring instructions followed by people.

People are using encryption to protect files and conversations - the FBI calls this "going dark". So, should we be allowed to publish encryption software? Imagine if you remove the computer from this situation.

Jefferson and James Madison communicated via encoded messages: Thomas Jefferson distributed instructions that James Madison used, by hand, to encrypt private letters. No computer involved - it was all done by hand. What if they had published how to do this in a book, a criminal then used it, and the FBI came and said you can't publish this? Is that allowed? The government can censor only if the book is intended to help criminals.
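
To make "encryption by hand" concrete, here's a minimal sketch of a Vigenere-style cipher - not Jefferson's actual scheme, just a period-appropriate letter-by-letter method that needs nothing more than pencil and paper (the key and message are made up):

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Shift each letter of `text` by the matching key letter (A=0 .. Z=25)."""
    out = []
    for i, ch in enumerate(text.upper()):
        shift = ord(key[i % len(key)].upper()) - ord("A")
        if decrypt:
            shift = -shift
        out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

ciphertext = vigenere("MEETMEATDUSK", "MADISON")   # doable by hand
assert vigenere(ciphertext, "MADISON", decrypt=True) == "MEETMEATDUSK"
```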

Lawyers will claim that free speech needs a software exception. Imagine software made to destroy the navigational systems on airplanes - what if it were a book that described how to do this? The computer is irrelevant to the question. The courts should look at the intent, just as they do when you present them with a book.

According to the FBI's own files, in 1963 its Domestic Intelligence head thought Martin Luther King Jr. was a Russian agent. In 1964, King won the Nobel Peace Prize - and that same year the FBI sent him an anonymous letter encouraging him to commit suicide. In 1967, the NSA also started surveillance on King.

As far back as 1977, the NSA's Joseph Meyer threatened organizers of a crypto conference with prosecution under export laws.

As for Dan himself: he sent a crypto paper and crypto software to the NSA, asking for permission to publish. The NSA refused, classifying both the paper and the software as "munitions" subject to export control. By 1995, the NSA was telling the courts they were trying to protect America - but conceded that papers were okay (free speech) and allowed Dan to publish the paper (though not the software).

Unfortunately for the NSA, Judge Marilyn Hall Patel disagreed with them in 1996 and found that software is free speech - it's just language. The court of appeals agreed in 1999.

Now back to the modern day - Apple vs. the FBI - but imagine it without the computer. Imagine the FBI coming to Jefferson, demanding that he write new anti-encryption instructions and falsely sign those instructions as legitimate. Jefferson says the instructions are too dangerous to create. The US Supreme Court has noted that freedom of speech includes "both what to say and what not to say".

Ask yourself - what is the software doing in this picture? What if we were doing the same thing ourselves, by hand? The courts know how to handle that, and you should, too.


ICMC17: Zero Knowledge Doesn't Mean Zero Ethics

Joshua Marpet, SVP, Compliance and Managed Services CyberGRC

Zero knowledge systems rest on mathematical proof: zero knowledge proofs and verifiable secret sharing are vital for multi-party secure sharing. They can be used in health care, blockchain, etc.
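
As a concrete taste of the secret-sharing half of this, here is a minimal sketch of plain Shamir secret sharing in a prime field - without the extra commitments that make it *verifiable*, and with toy values:

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime; all arithmetic is in this field

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(secret=12345, k=3, n=5)
assert reconstruct(shares[:3]) == 12345   # any 3 of the 5 shares suffice
```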

You can use blockchain in healthcare to exchange information across health care networks (for example, between a hospital in DC and a hospital in California).

How do you know you are working with an ethical party? Is the NSA ethical? What about Geek Squad? If you are building a zero knowledge system, will you be fostering bad ethics?  For example, the blockchain for bitcoin contains child porn.

Think about free speech - you can talk about almost anything, but not necessarily incite behavior. For example, you cannot falsely shout fire in a crowded theater.

So, you need a very clear Terms of Service and Acceptable Use policy, and a provisioning checkbox along the lines of "Will you be hosting illegal content?" Yes, they can violate it - but then you will not have an ethical conundrum when law enforcement asks for that user's illegal data.

Now, don't be a bad provider. Don't monitor your customers' content, and don't be inconsistent or non-responsive. Respect warrants - but use reason. Something doesn't seem right? Consult your lawyer, the EFF, etc.

ICMC17: Revisiting Threat Models for Cryptography

Bart Preneel, imec-COSIC KU Leuven, Belgium

Rule #1 of cryptanalysis: search for plaintext first :-)

With the Snowden documents, we learned that the NSA is foiling much of the deployed encryption - using supercomputers, turnkeys, backdoors, etc.

If you can't get the plaintext, try just asking for the key - then you can do the decryption yourself. About 300,000 national security letters have been issued since 2001. Most come with gag orders, so it's difficult to get this information.

Yahoo fought the security letter it received. Others, like Silent Circle and Lavabit, just shut down.

So, think about PFS (perfect forward secrecy) - if someone gets one of your keys, can they get your older data as well? You can replace RSA key exchange with Diffie-Hellman for perfect forward secrecy. Logjam, though, was able to subvert the system by downgrading the negotiation to weak parameters and then reading your data.
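
A sketch of why ephemeral Diffie-Hellman gives forward secrecy: each session uses fresh secrets that are thrown away afterwards, so there's no long-term key whose later theft unlocks old traffic. These are toy parameters for illustration only - real deployments use standardized 2048-bit+ groups (e.g. RFC 3526) or elliptic curves:

```python
import secrets

p = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, a prime - far too small for real use
g = 5

# Each side picks a fresh ephemeral secret per session.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1

A = pow(g, a, p)   # sent to the peer in the clear
B = pow(g, b, p)

# Both sides derive the same shared secret; a and b are then discarded,
# so a future key compromise reveals nothing about this session.
assert pow(B, a, p) == pow(A, b, p)
```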

If you can't get the private key, try substituting the public key (because you have the private key for your public key!). The most recent attacks in this area were fake SSL certificates and SSL person-in-the-middle attacks.

This brought about Let's Encrypt, which has been live since 2015.

If you can't get the key, try cryptovirology (book by Young and Yung).

Or, how about a trapdoor in your PRNG (Dual EC DRBG, in Juniper's ScreenOS)?

What other technology might be similarly subverted?

If you can't undermine the encryption, how about attacking the end systems?

Hardware hacking: intercepted packages are opened carefully and a "load station" implants a beacon. If you don't want your routers to come with "extra bits", you might want to pick them up from the manufacturer directly (pictures were shown of this happening to Cisco routers).

There is a chip that can be installed between the monitor and the keyboard; it can be powered up remotely by radar, and then a remote attacker can read what's on your screen.

Maybe we need offense over defense? How many 0-days do our governments have? Are they revealed to vendors, and if so, when? The NSA claims to have released more than 90% of its 0-days to vendors, but says nothing about how long it holds onto the attacks before doing the notification.

Another good way to fight encryption - complicated standards! Does anyone really fully understand IPsec, for example? Backdoors are another way, but as we saw with Dual EC DRBG, the backdoor itself can be backdoored...

There are 18 billion deployed encryption devices that protect industry - not you. Think of DRM to control content.

There are 14 billion encryption devices that protect users, but there are issues. Look at encryption on phones - it's not end-to-end, so there are still gaps. Consumers might have "encrypted hard drives", but without key management, the hard drive can just be pulled out, put into another machine, and read.

There are issues with many messaging services - they back up your messages in the clear in the cloud.

Secure channels are still a challenge, with lack of forward secrecy, denial of service, lack of secure routing, and lack of control over metadata (which is still data!). Tor hides your IP address, but not your location, so it is limited.

When doing design, avoid a single point of trust that becomes a single point of failure. And stop collecting massive amounts of data.

Distributed systems do work: consider the root keys of some CAs, Skype (pre-2011), and bitcoin.

We need new ways to detect fraud and abuse.  We need open source solutions, open standards, effective governance and transparency for service providers.  And finally, deploy more advanced crypto.


ICMC17: Encryption and Cybersecurity Policy Under the New Administration

Neema Singh Gulani, Legislative Counsel (Privacy and Technology), ACLU

We still don't know what the policies are going to be yet, but she's here to give us her understanding of where we are and where she thinks we're going.

Why should you care, if you're not a lawyer? Look at Lavabit - a company that offered an encrypted email service. All was well and good until it was discovered that Edward Snowden used their service. The US government requested their encryption keys (under a gag order, so they could not tell their users), and a judge ordered them to give up the keys - not just the keys that protected Snowden's mail, but everyone's. The company shut down, because they no longer felt they could protect their users.

Right now we are seeing a very divided government, polar opposites on a lot of issues - but the two sides will work together on reining in NSA surveillance and protecting encryption keys.

The Obama administration considered various technical options to get around the "going dark" problem - so law enforcement could access information the way they had before encryption became pervasive. They considered several things like backdoors, remote access, and forced updates - and the administration decided to work with the commercial providers of the products, as opposed to building legislation.

We don't know clearly where the Trump administration stands. We know that Trump was critical of Apple for not wanting to give law enforcement a back door into an iPhone. Jeff Sessions noted once that he was in favor of encryption, but that criminal investigators need to be able to "overcome" encryption.

There is proposed legislation from Burr/Feinstein that would require manufacturers to provide data in "intelligible form" (covering both software and device manufacturers). The ACLU is not in favor of this bill; it has been called "technically tone deaf".

We know that the Obama administration had an interagency process, run out of the White House, that didn't have any "hard and fast rules" on vulnerability disclosure. It was used to balance risk against intelligence needs. The NSA said most vulnerabilities are disclosed. Is that good enough to protect users of tech?

NSA surveillance under Section 702: it targets 106,000 foreign targets and collects over 250 million internet transactions annually; about 50% of that information is about a U.S. resident. Section 702 is up for review in Congress again this year.

Because of Trump's accusations of wiretapping, this may be an opportunity to reform Section 702.

Right now, based on a 6th circuit court decision from 2010, most US companies require a warrant before they will provide content to the FBI or other law enforcement.

An email privacy act was passed by the House 419-0, but the bill got stuck in the Senate, where too many unrelated things were added.

Many users would be surprised at the low bar required for companies to hand over their data to the FBI or local law enforcement - or that they would not necessarily be notified when it happens.

If you're building products for the government, think about how the product will be used and can you audit that it's being used as intended?

Look to see what lobbyists your employer is backing and see if it lines up with their public press releases - if not, say something. Consider also direct lobbying - there is a dearth of technical knowledge on Capitol Hill; they need your expertise!



ICMC17: Crypto: You're Doing it Wrong

Jon Green, Sr. Director, Security Architecture and Federal CTO, Aruba Networks/HPE

Flaws can be varied and sad - like forgetting to use crypto at all (such as calling a function that was never completed for your DRBG; Jon showed us an example of validated code where the function was empty and its comment contained TODO). Other issues show up in large multi-module products that may contain code written in C, Java, Python, PHP, JavaScript, Go, Bash... all claiming to get FIPS from OpenSSL. Most of those languages aren't going to be using OpenSSL, so they won't be using FIPS-validated crypto.

Developers often don't know where the crypto is happening. They may forget to complete certain code segments, rely on 3rd-party and open source code, and rely on multiple frameworks. Even when they do know, they may not want to dive in because of the amount of work required to make things work correctly, particularly from a FIPS perspective.

Would the FIPS code review done by the lab catch this? Almost certainly not, as the labs typically aren't looking at the application code - just the low-level crypto and RNG functions. Even the old German scheme for EAL4, with its deeper code review, still missed issues (the TODO code above went through EAL2 and EAL4 review). Testing misses these nuances as well.
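
Even a trivial power-on sanity check - far short of the formal FIPS health tests - would catch a stubbed-out DRBG like the TODO example. A sketch (the function name and checks are my own, not from the talk):

```python
import os

def smoke_test_drbg(generate, n_calls: int = 8, out_len: int = 32) -> None:
    """Cheap checks that catch a do-nothing generator: correct length,
    not all zeros, and no identical output across calls."""
    seen = set()
    for _ in range(n_calls):
        out = generate(out_len)
        assert isinstance(out, (bytes, bytearray)) and len(out) == out_len
        assert out != bytes(out_len), "all-zero output from DRBG"
        assert bytes(out) not in seen, "repeated output from DRBG"
        seen.add(bytes(out))

smoke_test_drbg(lambda n: os.urandom(n))   # passes for a real entropy source
```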

Security audits of your code are very fruitful, but very expensive. He's seen success with bug bounty programs, even if the code is closed.

He's also seen problems with FIPS deployments leveraging "FIPS inside" - where they use another module, like OpenSSL, but forget to turn on FIPS mode, or forget to update the applications so they won't try to use non-FIPS algorithms.
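
For the "forgot to turn on FIPS mode" case: with an OpenSSL 1.0.x build that includes the FIPS Object Module, the application has to request FIPS mode explicitly and should verify the switch took effect. A sketch via ctypes (the library name/path varies by platform, and this assumes a FIPS-capable build):

```python
import ctypes

# Assumes an OpenSSL 1.0.x libcrypto built with the FIPS Object Module.
libcrypto = ctypes.CDLL("libcrypto.so")

if libcrypto.FIPS_mode_set(1) != 1:    # returns 1 on success, 0 on failure
    raise RuntimeError("failed to enter FIPS mode")
assert libcrypto.FIPS_mode() == 1      # verify rather than assume
```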

Another problematic approach: the developer follows all the steps to deploy CentOS in FIPS mode by following the Red Hat documentation... except that documentation only applies to Red Hat and *not* CentOS. Yes, it's the same source code, but validations are not transitive. A Red Hat validation only applies to Red Hat deployments.

To get this right, identify the services that really need FIPS-validated crypto and focus your efforts there.


ICMC17: Keynote: From Heartbleed to Juniper and Beyond


Matthew Green, Johns Hopkins University.

Kleptography - the study of stealing cryptographic secrets. Most people did not think the government was really doing this. But we do know there was a company, Crypto AG, that worked with the NSA on the cipher machines it sold between the 1950s and 1980s.

Snowden's leak contained documents referring to the SIGINT Enabling Project - a plan to insert back doors into commercial encryption devices and add vulnerabilities into standards.

How can the government do this? They can't really change existing protocols, but they can mandate the use of specific algorithms. This brings us to the "Achilles heel" of crypto - the (P)RNG. If that's broken, everything is broken.

There are two ways to subvert an RNG - attack the low-level TRNG or the PRNG. The TRNG is probabilistic and hardware specific, with too much variance to target reliably. The PRNG/DRBG is software, and everyone has to use a specific one to comply with US government standards - a much more appealing target.

Young and Yung predicted this kind of attack against a DRBG, and how it might work, back in the 1990s: a master key and a trapdoor to the internal state let you decrypt the data. This sounds a lot like Dual EC DRBG, which was almost immediately identified as having this weakness - if the values were not chosen correctly. The NSA chose the values - and we trusted them.
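
A toy analogue of the trapdoor, using a multiplicative group instead of elliptic-curve points (the real Dual EC uses points P and Q with a hidden relation P = dQ, but the algebra of the backdoor is the same; all values below are made up):

```python
p = 2**127 - 1           # prime modulus for the toy group
Q = 3                    # public generator
d = 0xC0FFEE             # the "master key", known only to the designer
P = pow(Q, d, p)         # published constant; its relation to Q is hidden

def toy_dual_ec(state):
    output = pow(Q, state, p)      # what consumers of "random" bits see
    next_state = pow(P, state, p)  # the generator's new internal state
    return output, next_state

out, next_state = toy_dual_ec(state=987654321)

# Whoever knows d turns one public output into the next internal state:
assert pow(out, d, p) == next_state    # (Q^s)^d == (Q^d)^s == P^s
```

From the recovered state, every future output is predictable - and for a DRBG feeding TLS nonces and keys, that means the traffic can be decrypted.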

The Snowden leaks included documents referring to a "challenge of finesse" regarding pushing this backdoor through a standards body. Most of us don't have to worry about the government snooping on us, but... what if someone else could leverage this back door?

This is what happened when Juniper discovered unauthorized code in their ScreenOS that allowed an attacker to passively decrypt VPN traffic. Analysis of the code changes showed that the parameters for Dual EC DRBG had been changed. But Juniper's security policy said they didn't use Dual EC DRBG other than as input into 3DES (which should mask anything bad coming out of the DRBG). The problem: a global variable was used as a for-loop index (bad idea), which in effect means the for loop that was supposed to do the 3DES mixing never runs (the Dual EC DRBG subroutine uses the same global variable).
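
A simplified sketch of that bug pattern (not Juniper's actual code): the reseed routine and the mixing loop share one global index, so after a reseed the loop condition is already false and the raw DRBG output escapes unmixed:

```python
index = 0   # global loop index, shared by both routines (the bug)

def prng_reseed(buf):
    global index
    buf[:] = b"\xD0" * len(buf)   # stand-in for raw Dual EC output
    index = len(buf)              # leaves the global at the loop bound

def prng_generate(out_len=32):
    global index
    buf = bytearray(out_len)
    prng_reseed(buf)              # index is now == out_len
    while index < out_len:        # never true: mixing is skipped entirely
        # ...the 3DES mixing that was supposed to run would go here...
        index += 8
    return bytes(buf)             # raw Dual EC output is returned
```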

More specifically, there are issues with how IKE was implemented. The impacted version, ScreenOS 6.2 (the version that added Dual EC DRBG), also added nonce pre-generation.

Timeline: in 1996 Young and Yung propose the attack; in 1998 Dual EC DRBG is developed at the NSA; in 2007 it becomes a final NIST standard; in 2008 Juniper adds Dual EC DRBG; it is exploited in 2012 and not discovered until 2015.

Before Dual EC DRBG, people used ANSI X9.31 - which has a problem: if you use a fixed, known K, someone can recover the state and subvert the system.
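
A sketch of why a fixed, known K is fatal for the X9.31 generator, using pycryptodome's AES as the block cipher E_K (toy values; the structure follows the standard, the scenario is illustrative):

```python
from Crypto.Cipher import AES   # pycryptodome

K = bytes(16)                          # the fixed K shipped in the product
E = AES.new(K, AES.MODE_ECB).encrypt

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def x931_step(T, V):
    I = E(T)                  # encrypted timestamp
    R = E(xor(I, V))          # output block the attacker can observe
    return R, E(xor(R, I))    # (output, next state V')

T = b"\x00" * 15 + b"\x01"    # an attacker can guess the timestamp
V = b"\x42" * 16              # the secret internal state
R, V_next = x931_step(T, V)

# With K known and T guessed, an observer recovers the next state from R,
# and from there can predict all future outputs:
assert E(xor(R, E(T))) == V_next
```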

How do we protect ourselves? We should build protocols that are more resilient to bad RNGs (though that's not what is happening). But maybe protocols are not the issue - maybe it's how we're doing the validation. Look at FortiOS, which had a hard-coded key in its devices, used for testing FIPS requirements - and documented in its security policy.

Thursday, May 18, 2017

ICMC17: Evolving Practice in TLS, VPNs, and Secrets Management

Kenneth White (@KennWhite)

A good quote starts: "There is no difference, from the attacker's point of view, between gross and tiny errors. Both of them are equally exploitable."... "This lesson is very hard to internalize. In the real world, if you build a bookshelf and forget to tighten one of the screws all the way, it does not burn down your house."

We look for the following in network transport encryption: data exposure, network intercept, credential theft, identity theft, authenticated cipher suites, etc.

We have learned, the hard way, the problem with unauthenticated block modes. If you don't compute the hash correctly, or compute it in the wrong order, it's useless.

After POODLE, SSLv3 is dead. It's still out there, but as a practical matter, it's gone.

Getting good data on who is impacted by a security vulnerability is hard - even Gartner got this wrong, overestimating who was impacted by FREAK by counting how many devices still supported SSLv3 (even if they did not actually have the vulnerability).

Advice going forward: use AEAD!
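
A minimal sketch of what "use AEAD" looks like in practice, here with ChaCha20-Poly1305 from the Python `cryptography` package - integrity comes built into the mode, so there is no separate hash to compute in the wrong order:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()
aead = ChaCha20Poly1305(key)
nonce = os.urandom(12)                 # must never repeat for a given key

# Encrypt-and-authenticate in one call; the tag is appended to the ciphertext.
ct = aead.encrypt(nonce, b"attack at dawn", b"header")
assert aead.decrypt(nonce, ct, b"header") == b"attack at dawn"
```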




ICMC17: Crypto++: Past Validations and Future Directions

Jeffrey Walton, Security Consultant.

This is an older toolkit; Jeff fell in love with it when he was in college in the '90s. He's been working in computer security ever since.

Crypto++ is a C++ class library, written by Wei Dai starting in June 1995. It's a general-purpose crypto library, handed over to the community in 2015. https://www.cryptopp.com/

When the library was hit with a CVE in 2015, Wei Dai handed it over to the community to develop. Since Jeffrey had been using it since the 1990s, he was chosen to become one of the maintainers. Wei Dai still advises.

The library supports C++03 through C++17, with heavy use of templates and static polymorphism (yay C++). That makes things faster, but hard to adopt, especially as there is no excellent documentation. He uses questions on Stack Overflow to figure out where to spend time on the documentation.

Right now, Crypto++ is on the historical validation list, which makes the validation pretty much useless... (they, and everyone else, ended up on that list last year due to DRBG changes).

Crypto++ validations are on Windows only. They include the NIST-approved algorithms: RNG, AES, SHA, MAC, RSA, DH. There are non-FIPS routines in other DLLs.

Going forward, he'd like to add C bindings and an engine-like interface. Will they do another validation? Probably not - too expensive. But they could wrap other validated crypto to take advantage of it.

Crypto++ now uses OpenSSL's FIPS Object Module, to effectively provide a FIPS validated module - so you can stay on your C++ bindings and not make changes to your application.


ICMC17: Penetration Testing: TLS 1.2 and Initial Research on How to Attack TLS 1.3 Stacks

Scapy TLS: A scriptable TLS stack, Alex Moneger, Citrix Systems

TLS is the protocol that secures the internet, and there are very few alternatives. It's a session-layer protocol that carries other protocols, and it is very complex. Sure, you can implement it in 3 weeks - but will you get it right?

TLS is under scrutiny and there is growth in the number of attacks and their frequency.

We need to make sure we understand each attack properly and understand its practical impact. How reproducible is the attack? How can we fix it and make sure it stays fixed? Customers often don't understand the impact or how to fix it.

Scapy TLS is a TLS & DTLS scriptable stack built on top of scapy. It's as stateless as possible, and includes packet crafting, dissection, and crypto session handling.
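
To give a flavor of what "dissection" means at this layer (this is not the Scapy TLS API - just a stdlib illustration with a fabricated record): every TLS record starts with a 5-byte header of content type, version, and length:

```python
import struct

# A fabricated handshake record: type 22, wire version 0x0303, 42-byte body.
record = bytes.fromhex("160303002a") + b"\x00" * 42

content_type, version, length = struct.unpack("!BHH", record[:5])
assert content_type == 22          # 22 = handshake, 23 = application data
assert version == 0x0303           # wire version for TLS 1.2
payload = record[5:5 + length]
assert len(payload) == 42
```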

The goals of the project are to make it easy to install and use, and to simplify the discovery and exploitation of TLS vulnerabilities - it's very customizable.

We then got to see some code - it looks very simple to use.

The theory here is that you can use this tool to work on PoCs faster. It's on GitHub :)

ICMC17: What's new in TLS 1.3 (and OpenSSL as a result)

Rich Salz, Akamai, OpenSSL Developer

TLS 1.0 was a slight modification of the original SSL protocol from Netscape, published in January 1999. It's basically the same as SSLv3 - it is bad, it is weak, it has no good ciphers, and it's still in wide use.

TLS 1.1 came out in April 2006. It would be great to kill both TLS 1.0 and 1.1 off - but it's just too hard.

TLS 1.2 has been around since August 2008. Since then, there have been a bunch of new RFCs and algorithms from all over the world.

TLS 1.3 was approved in October 2015 in IETF.

Background on the IETF: if it doesn't happen over email, where everyone can see it, it didn't happen. The IETF is divided into areas like Security, Operations, DNS, etc. - each area has area directors, and working groups with working group chairs. It's a very well laid out, very egalitarian process.

The working groups operate by consensus - demonstrated by humming, so people can't be singled out by raised hands or a verbal roll call. Many documents now do their iterations on GitHub.

IETF also has RFC editors who make sure final revisions follow the consistent and correct format for an RFC.

TLS 1.3 had a few goals: encrypt as much of the handshake as possible, reduce the handshake latency - ideally to one round trip, or zero for repeated handshakes - update the record payload, and make it more privacy friendly.

Key exchange only uses ECDHE and DHE (but nobody will use DHE - too expensive), so all connections have perfect forward secrecy. Most things are encrypted (SNI is not). The most common curves will be NIST P-256 and X25519.
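
A sketch of the ephemeral ECDH exchange at the heart of the handshake, here with X25519 via the Python `cryptography` package - both sides derive the same secret from fresh, per-connection keys:

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Each side generates an ephemeral key pair (this is what gives PFS).
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

# They exchange public keys and each derives the same shared secret.
client_shared = client_priv.exchange(server_priv.public_key())
server_shared = server_priv.exchange(client_priv.public_key())
assert client_shared == server_shared
```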

There are improvements to bulk encryption. Three ciphers, two with larger-key variations: ChaCha20/Poly1305, AES-GCM-128 (and 256), and AES-CCM-128 (and 256). The cipher lists and choices were just getting too long, and mistakes were being made - these are the ones you need. The cipher suite no longer specifies the key exchange or authentication mechanism; those are negotiated separately now.

The bulk encryption ciphers chosen are all modern, secure, and AEAD-only - in addition to confidentiality, you also get integrity (authenticity).

Authentication improvements: DSA was removed, RSA was kept, and RSA-PSS is preferred over PKCS #1 v1.5.

There was general cleanup, like the removal of renegotiation and of export crypto. Almost every message now has extensions: OCSP or Certificate Transparency stapling on the server and client side. There is now padding at the TLS layer (this was slightly controversial). There is only one key-derivation mechanism, HKDF (RFC 5869), and it's used consistently (and correctly). They've done cryptanalysis and attack events, found issues, and fixed them. There will be no SHA-3. Does that mean SHA-3 is no good? No, there were just concerns about performance and newness when the TLS 1.3 work started.
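
HKDF is small enough to sketch from RFC 5869 with just the standard library - an extract step that concentrates entropy into a pseudorandom key, then an expand step that stretches it into as many labeled keys as needed (the salt/label values below are made up):

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """PRK = HMAC-Hash(salt, input keying material)."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """OKM = T(1) | T(2) | ... where T(i) = HMAC(prk, T(i-1) | info | i)."""
    okm, t, i = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
        i += 1
    return okm[:length]

prk = hkdf_extract(salt=b"\x00" * 32, ikm=b"shared secret from the handshake")
key = hkdf_expand(prk, info=b"example label", length=32)
```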

SHA will still be around in certificates, though even there SHA-1 is being rapidly phased out.

Renegotiation is gone. There were many use cases - requesting a client cert, "step-up" crypto algorithms, and re-keying - but it was buggy, unclean, and a source of problems. Only strong crypto is available now.

There is allowance for session resumption, preferring sessions over tickets. The server can send session information at any time, and the session acts like a PSK - and a PSK acts like session resumption. If you're reconnecting, send data with the resumption :-) This gets you 0-RTT - zero round-trip resumption.

0-RTT: the client connects, and client & server do the ECDHE dance. The client remembers the server's key share. Next time, the client reconnects and sends data encrypted with that key. This avoids an extra round trip and reduces latency. This makes your web faster, and then we can all make more money.

0-RTT has a big "but"... there is no PFS for early data, and that data can be replayed elsewhere (only safe when the request is idempotent, like a GET). Nobody was really listening to these concerns until Colm at Amazon spoke up and posted about it two weeks ago; now everybody's trying to figure out what to do. We want to do the right thing, but one of the primary goals was to improve performance - and browser vendors will use it anyway.

TLS 1.3 is very close to IESG review. Chrome, Firefox, and some others are at draft-18. Draft-19 had minor changes; draft-20 is incompatible.



ICMC17: Control Your Cloud: BYOK is Good, But Not Enough

Matt Landrock, CEO, Cryptomathic

BYOK suggests a one-way mechanism: your key, my cloud.

The word "key" tends to be understood in a very broad sense (symmetric and asymmetric); cloud service providers, however, give it a somewhat different definition.

The current key management services on offer are MS Azure, Amazon AWS, and Google Cloud Platform. Azure uses Thales HSMs and Amazon uses Gemalto; Google doesn't appear to use an HSM at this time. Their biggest differences are around the BYOK protocols (key wrapping, etc.).

BYOK is an important tool, but should not be the only tool in your toolbox. It helps you get your own key into the cloud, so you know it meets your standards for generation. The cloud provider will handle things for you, but not in a consistent way - so there are lots of hurdles to go through to get this done.
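
A sketch of the key-wrapping step at the heart of most BYOK flows, using RFC 3394 AES Key Wrap from the Python `cryptography` package (the variable names and the surrounding scenario are illustrative; each provider wraps its own protocol around this):

```python
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(32)            # wrapping key (in practice, from the provider's HSM)
tenant_key = os.urandom(32)     # "your key", generated under your own controls

wrapped = aes_key_wrap(kek, tenant_key)          # safe to ship to the cloud
assert aes_key_unwrap(kek, wrapped) == tenant_key
```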

MYOK - Manage Your Own Keys! MYOK implies you can manage import/export, lifecycle, and generation all by yourself: provision keys when you need them, destroy them when you're done. How can you do this in a way that is meaningful for your business?

Centralized key managers are moving into the market space.

Key management is more than just keys - there's the name, algorithm, length, and export settings. And many key formats end up being very vendor specific (some use standards like PKCS #8, but many more are just proprietary).


ICMC17: TLS Panel Discussion

Tim Hudson - Cryptsoft
Steve Marquess - OpenSSL
David Hook - Bouncy Castle
Kenn White - Open Crypto Audit
Nicko van Someren - Linux Foundation

There are many implementations of TLS, from C and assembly implementations to Java implementations. OpenSSL has many forks - some obvious, some hidden under the covers. More implementations are good, as we don't want to suffer from a monoculture.

There is possible value in creating a drop-in API that multiple implementations could use. Nicko suggests creating an open process to design that API - it would not necessarily be derived from an existing implementation, and it could allow for more automation. APIs tend to grow out of necessity and are not always pretty; a common API could help with security and fuzz testing.

There was then a long side discussion on libsodium (NaCl), which is only a crypto library at this point - the "N" (networking/TLS) piece hasn't been implemented yet - but there are lots of language bindings out there.

How long does it take to create a new TLS library?  Apparently a new one was created recently in 3 weeks - leveraging someone else's crypto.  Language choice is important.

The older APIs have to consider legacy deployments - for example, OpenSSL still supports VAX/VMS. Another perspective: a new TLS implementation has already taken another team 5 months. When you have a user base you need to support, that seems to add time to the implementation.

There was a question about moving your implementation from TLS 1.0 to TLS 1.3. One of the OpenSSL developers noted that there is a lot of code reuse, but also a lot of #ifdefs. (TLS 1.0 is not yet #ifdef'd out of OpenSSL, but probably will be within the year.)

There was a question about the biggest issue with FIPS validations. The general answer: consistency. OpenSSL helped take multiple validations through at the same time - based on the same code and mostly the same documentation - and they took different amounts of time and got completely different feedback.



ICMC17: Inside the OpenSSL 1.1 FIPS Module Project

Tim Hudson, CTO Cryptsoft and Mark Minnoch from SafeLogic.

In July 2016, OpenSSL announced the commencement of a fresh attempt at a FIPS validation of OpenSSL. There are over 244 validated products on the NIST list that obviously use OpenSSL within their validation boundaries, and it's included in most (all?) operating systems - it's pervasive! So, why is it so hard to validate? It starts out as open source, with lots of competitors/stakeholders interested in it.

Unfortunately, stakeholder goals and project goals do not always align. For example, the project wants to support many platforms - stakeholders want to focus on only one or two. The same goes for the number of algorithms supported and validated.

Previously, FIPS 140 work effectively funded the entire OpenSSL project from 2009-2014, as there was no long-term or major sponsor at the time. The sponsors funding OpenSSL FIPS work all had different goals (other than wanting to sell into the US government), which made it very difficult to manage. This is a hard project, with many people yelling at you with different goals, and it wasn't very rewarding - you can't just expect people to do this for "fun". [note: yes, nothing about FIPS is "fun" - practical, yes, but not fun]

The first validation was very painful for the developers, so OpenSSL knows it has to be done differently if they are ever going to do it again. OpenSSL started its first FIPS 140-2 validation in June 2002; the certificate was not received until March 2006!

There have been a total of 9 unique validations, to keep up with new hardware platforms and implementation guidance changes.

The OpenSSL FIPS 1.0 module, based on OpenSSL 0.9.x, is no longer usable. There is still a bit of life left in the OpenSSL FIPS 2.0 module (#1747, #2389, #2437), which is based on the OpenSSL 1.0.x code. But a major update is required for a new OpenSSL FIPS module to work with OpenSSL 1.1.x. For this go-round, the goal is to make the FIPS 140-related changes "less intrusive".

Current validations cover dozens (hundreds?) of platforms (OS/hardware combinations).

For the new validation, the only current sponsor is SafeLogic, but additional sponsors are needed to fund OpenSSL FIPS development and FIPS lab testing - resources are available now to begin work. 

This is a high-risk validation - many people will be watching it, which makes sponsors cautious to enter, which creates a longer timeline. Keep in mind that TLS 1.3 is only available in OpenSSL 1.1.x, so if that's important to your customers, consider helping out financially to get this project going.

It's hard to get sponsors on board, as they all want to see another sponsor already on board to share the cost - yet they still want to wield great influence over the work.

If this project doesn't happen, there will be fewer options for FIPS libraries, and you will have to do more of your own FIPS work. Taking multiple versions from different companies through CAVP/CMVP is a waste of their resources as well. Also, if everyone develops independently, the federal government will end up with inconsistent implementations.

Originally the team was going to do the FIPS 140-2 work before TLSv1.3, but the priorities got swapped. TLSv1.3 was easier to get a sponsor for, as it's a well-defined project - and now the FIPS work can happen with TLSv1.3 in place.

OpenSSL has refactored its algorithm-testing approach, wants to better support embedded systems, and wants to do better with entropy generation. They also need to pick up extra NIST work and try to take SHA-3 through CAVP/CMVP.

They will continue to look at improvements to the POST (like defining what a power-on self test means for software). They are also considering adding ChaCha/Poly1305.

They currently cannot commit to many requested features, simply to keep a reasonable timeline. The current schedule estimate from "start" to certificate is 18 months, based on their experience taking other modules through.

Please consider sponsoring this project so it can get off of the ground!




ICMC17: Keynote: Driving Security Improvements in Critical Open Source Projects

Nicko van Someren, CTO, Linux Foundation.

Open Source is huge and it's here to stay, with nearly 4 million contributors worldwide and 31 billion lines of committed open source code - we aren't getting away from it now! Open Source is the "roads and bridges" of the Internet, and the Internet runs on it.

Sometimes open source breaks... think Heartbleed, Shellshock, POODLE, etc. The Internet runs on open source, but it's not always properly looked after. Linus's Law says, "Given enough eyeballs, all bugs are shallow" - so why are there still bugs? Well, not enough eyeballs!

Open source software is not more or less secure than closed source - just different. Typically a more diverse group of people works on the source, but serially, over a long period of time. There is often a culture of "code is more important than specification" - a cultural difference from most businesses.

Major projects are very under-resourced. OpenSSL is run by millions of businesses, but got only $2,000 in support in 2013. NTPD is run by every major stock exchange, but some of the code is 35 years old, maintained by one guy, part time. The same goes for bash, GnuPG, and OpenSSH.

These open source projects are not  given the resources they deserve.

The Linux Foundation created the Core Infrastructure Initiative. The CII aims to substantially improve security outcomes in the OSS projects that underpin the Internet. The CII funds work in security engineering, security architecture, tooling and training on key OSS projects.

This market is changing quite quickly as well - who would've known 4 years ago how important node.js would be?

CII is a nonprofit funded by industry partners like Intel, Microsoft, Google, Hitachi, Dell, Cisco, Amazon, Bloomberg, Fujitsu, etc.

Open source can do all of the same things a commercial enterprise does to build secure software - it's just harder, because there is no way to issue a top-down mandate (a la Bill Gates fixing the security mindset at Microsoft).

Groups and individuals must think about security early and often; it cannot be just one squeaky wheel mentioning security - it requires buy-in from the entire community. Fostering this culture of security within your open source project is the single most important thing you can do to improve your security outcomes. Security needs to be given equal weight with scalability, performance, usability, and other design factors.

CII is trying to find out where the risks and problems are via the CII Census Project, which looks for the really critical open source projects, how responsive the developers are, historic trends in bug and vulnerability density, and how healthy the development community is. They did a snapshot a couple of years ago and created a scorecard, and are now working on turning it into a continuous evaluation.

Once a critical project's issues have been identified, CII tries to focus its resources on fixing them. Maintenance work is not fun, but it is vital. They are trying to pay developers to work on key projects full time, match willing and able developers to relevant projects, and encourage educational establishments to get students involved.

Additionally, CII is working on improving open source security tools. This means funding development of new or improved OSS security tools and making sure they are usable with a good signal-to-noise ratio. A problem with some of the existing tools: terrible documentation! So there is even a need to pay people to write documentation on how to use and deploy continuous security testing.

CII also wants to drive better security process in OSS projects with its CII Badge program - an open process for evaluating the security practices in your community. It's a self-assessment, with the goal of avoiding security theater, so it only includes items that really improve security.

CII has a travel fund to send developers to security conferences to learn about security and additional funding to get key OSS developer teams to meet face to face to set priorities and collaborate (like OpenSSL).

If your company is building your business on open source software, you should consider funding those projects and CII to help push better security practices, etc.