Tuesday, April 28, 2009

Keep Your Password to Yourself

There is a growing trend to treat passwords as public information that can be shared with pretty much any service. This trend is driven by services that want to tap into content stored elsewhere, which they do not directly control. They therefore ask users to enter their passwords, and may provide assurances that the data won't be stored after use.

Web 2.0

Social networks want to know everything about you: your contact information, friends, etc. But such data can be distributed among several services. For example, e-mail addresses of your friends may be stored in your webmail account, while contact information for your relatives is on a couple of different social networking sites. Various websites try to make it easier to gather all this information automatically for you. Facebook will ask you to provide your credentials for AIM, Gmail, etc. and use this information to identify a list of your potential friends. That's right, they ask you to give away your login and password so they can directly access your e-mail account. There is no obvious warning, or mention of a privacy concern. Giving away your password looks mundane, something you should not have to worry about.

Would you give away the login and password to your online banking service? Your PayPal password? No. But asking for access to your Gmail account, which may contain sensitive e-mail messages, is apparently just fine.

Enterprise culture

This attitude toward passwords is not restricted to Web 2.0 sites. In many companies, asking the IT department for help often means giving them your personal password as a standard procedure. That same password may allow someone to change your 401K contribution, access your paychecks online, or give a bonus to your direct reports.

It is also common to share personal passwords with co-workers because it is more convenient. Security and ease of use are often in contradiction, and both need to be balanced. But sharing your IT password should never be an option.

Better practices

Especially in the enterprise, a strong message needs to be put forward to let employees know that:
  • A personal password must never be shared, without exception
  • Credentials must always be entered using an encrypted protocol that does not permit clear-text transmission: HTTPS for websites, GPG/PGP or S/MIME for e-mails, SSH for remote shell access, etc.
If you need to share a password with a friend or coworker, this password should not be used for any other service.
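One way to enforce the rule above in an application is a pre-flight check on the destination URL before any credentials leave the machine. A minimal sketch in Python (the function names and the list of accepted schemes are mine, not a standard API):

```python
from urllib.parse import urlparse

def safe_to_send_credentials(url: str) -> bool:
    """Return True only if the URL uses an encrypted scheme."""
    scheme = urlparse(url).scheme.lower()
    # Accept only schemes known to encrypt the transport.
    return scheme in ("https", "ftps", "ssh")

def submit_login(url: str, user: str, password: str) -> None:
    """Hypothetical login routine that refuses clear-text endpoints."""
    if not safe_to_send_credentials(url):
        raise ValueError(f"refusing to send credentials over {url!r}")
    # ... perform the actual HTTPS request here ...
```

The point is that the check happens before the request is built, so a misconfigured or downgraded URL fails closed instead of leaking the password.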

As for Web 2.0 sites, a few companies have started to offer ways to let users share their data with third-party applications without giving away their password: Facebook Connect, OAuth (Twitter, Yahoo, Netflix, Google), etc. This needs to be encouraged.
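The key idea behind these schemes is that the third-party application redirects the user to the provider's own site to approve access, so the password never passes through the application. A sketch of that redirect step, assuming OAuth 1.0a-style parameter names (the provider URL and token values are illustrative, and the earlier request-token signing step is omitted):

```python
from urllib.parse import urlencode

def build_authorization_url(provider_base: str, request_token: str,
                            callback_url: str) -> str:
    """Build the URL the user is redirected to for approval.

    The user types their password on the provider's site, not ours;
    we only ever see the (revocable) token that comes back.
    """
    query = urlencode({
        "oauth_token": request_token,
        "oauth_callback": callback_url,
    })
    return f"{provider_base}/oauth/authorize?{query}"
```

The application can later exchange the approved token for an access token, which the user can revoke at any time without changing their password.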

-- Julien

Monday, March 30, 2009

Encrypt Everything

Nowadays, most people involved with computer security know that they should use encrypted protocols. With the rapid development of open wireless networks, the systematic use of encryption is becoming increasingly critical. How many people still use FTP or POP3 on the Internet? Too many...

Use Encrypted Equivalent Protocol

Most clear-text protocols on top of TCP have an encrypted counterpart. Here are a few examples:
  • telnet -> SSH
  • FTP, kermit -> SCP/SFTP over SSH, or FTPS over SSL
  • Webmail over HTTP -> HTTPS
Additionally, although service providers may offer encrypted protocols, they do not always enable them by default because encryption adds processing overhead. Webmail providers, for example, will typically default to an HTTP-only page. It is up to end users to explicitly request an HTTPS page when accessing the service.

Add your own SSL encryption

For TCP protocols, you can always use an SSL tunnel such as stunnel. You can use stunnel in different ways:
  • Install it on a client which does not support SSL to communicate with an SSL-enabled server.
  • Install it on a server which does not support SSL to communicate with SSL-enabled clients
  • Install it on both the client and the server to encrypt any TCP traffic
If your System Administrator tells you he doesn't want to touch that critical HTTP server to enable HTTPS, let him know he can install stunnel to handle the SSL traffic coming to port 443 and forward it in clear text to the local HTTP port 80. That way you get an HTTPS server without needing to reconfigure the existing HTTP server.
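That scenario maps to a very small stunnel configuration running in server mode; something along these lines (the certificate path and section name are examples, adjust to your environment):

```
; stunnel.conf -- terminate SSL on port 443, forward clear text to local port 80
cert = /etc/stunnel/server.pem

[https]
accept  = 443
connect = 127.0.0.1:80
```

stunnel takes care of the SSL handshake and decryption; the HTTP server on port 80 keeps running untouched.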

Use an SSH Tunnel

If using SSL encryption with stunnel is not possible in your environment, you can use an SSH tunnel. Using SSH port redirection (the -L option on Unix), you can redirect any remote port to a local port through an SSH tunnel. For example, ssh login@your-domain.net -L8000:your-domain.net:80 allows you to browse your-domain.net through SSH by using the address http://localhost:8000/.

You can also use an intermediate machine for your SSH encryption. ssh login@intermediate.net -L8000:your-domain.net:80 would encrypt the traffic between your client and intermediate.net. In this case, the traffic between intermediate.net and your-domain.net would not be encrypted.
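If you use such a tunnel regularly, the same -L redirection can be made permanent in your SSH client configuration (the host names and alias below are examples):

```
# ~/.ssh/config -- persistent version of "ssh -L8000:..."
Host tunnel-example
    HostName intermediate.net
    User login
    LocalForward 8000 your-domain.net:80
```

After this, "ssh tunnel-example" sets up the same redirection without retyping the options.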

Use Tor

Tor is becoming a popular way to encrypt traffic. There is a plugin for Firefox to enable/disable Tor with one click, a proxy to redirect any browser through Tor, etc. Although it is a very young project, this might be the most user-friendly solution, especially on Windows. The downside is that it significantly increases latency, so using Tor might slow down your web browsing.

- Julien

Tuesday, March 17, 2009

Two common misunderstandings about SSL

SSL is an encryption layer used commonly to secure HTTP (HTTPS), IMAP (IMAPS) and POP3 (POP3S). It is a widely used protocol, but it is not very well understood.

SSL is an additional network layer

In the example of HTTPS, SSL is a layer between TCP and HTTP. That means SSL is not aware of the layer above it, HTTP. You can replace the HTTP data by IMAP, or anything else.

A common misunderstanding is that only the HTTP data is encrypted, not the HTTP headers. If you think of SSL as an OSI layer, it is clear that all of the HTTP content must be encrypted. Layers below HTTP do not recognize the difference between HTTP headers and data.
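One way to make this concrete is to look at what actually gets handed down to the SSL layer: the request line, headers, and body are assembled into a single byte string before encryption, and SSL never parses it. A short Python sketch (the function name and values are illustrative):

```python
def build_http_request(host: str, path: str, body: bytes) -> bytes:
    """Assemble a complete HTTP request as raw bytes.

    Everything returned here -- request line, headers, and body alike --
    is one opaque payload from the SSL layer's point of view, which is
    why the headers are just as encrypted as the body.
    """
    headers = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
    ).encode("ascii")
    return headers + body

payload = build_http_request("www.mydomain.com", "/login", b"user=alice")
# The SSL layer encrypts `payload` wholesale; it never inspects it.
```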

When an HTTPS session is established, the TCP session is created first, then the SSL session, followed by the HTTP session. In other words, the SSL certificate is sent by the server to the client before any HTTP data is exchanged. If a server hosts several SSL certificates for different domain names, it has to send the correct SSL certificate before receiving the "Host: www.mydomain.com" header from the browser.

Since the server cannot decide which certificate to send based on the HTTP content, it must rely on data from the layers below, namely TCP and IP. From these two layers, the TCP destination port and the destination IP address are the most interesting.

A server with a single IP address can assign different ports to different certificates: certificate 1 (domain-1.net) to the default HTTPS port 443, certificate 2 (domain-2.net) to port 444, etc. In practice, this is not an elegant solution since any non-standard port has to be embedded in the URL: for example, https://www.domain-1.net/ vs. https://www.domain-2.net:444/.

The SSL certificate can also be chosen based on the IP address. If a single server hosts 4 domain names, it needs 4 different IP addresses, each associated with a unique certificate. This is the solution commonly used.
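With Apache's mod_ssl, for instance, this takes the form of one SSL virtual host per IP address (the addresses, domain names, and certificate paths below are examples):

```
# httpd.conf -- one certificate per IP address
<VirtualHost 192.0.2.1:443>
    ServerName www.domain-1.net
    SSLEngine on
    SSLCertificateFile /etc/ssl/domain-1.pem
</VirtualHost>

<VirtualHost 192.0.2.2:443>
    ServerName www.domain-2.net
    SSLEngine on
    SSLCertificateFile /etc/ssl/domain-2.pem
</VirtualHost>
```

The IP address in each <VirtualHost> line is what selects the certificate, since the Host header is not yet available when the SSL handshake happens.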

SSL protects the data, not your web server or web application

SSL encryption does not protect against SQL injection, cross-site scripting, DoS, etc., but it does protect against session hijacking and the theft of passwords and other sensitive user information. SSL protects the data; it does not directly protect the application.

- Julien

Wednesday, January 7, 2009

Confidential Data: Gone Surfing!?!

Data now moves primarily through the web, and unidentified data flowing out through Web 2.0 technologies poses the most significant risk. With this in mind, it is important to understand what kind of data goes unidentified, as well as the channels it can escape through.
For data that was identified while at rest, the web channel poses very little risk. Most solutions today are able to catch items leaving through the web when they have a tag on them. But what about the data that is too new or was not caught in the ID process? This data can be the most damaging. As with all things that are not defined, a set of parameters needs to be established in order to catch what you are looking for while reducing false positives. Look for solutions that have real-time checksums on credit cards, dictionary thresholds, and AI engines for records and source code to help catch this newer and unmarked data. Many corporate email filters offer these same techniques, but almost no one offers them for the web channel due to the latency this type of inspection can induce.
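The "real-time checksum on credit cards" mentioned above usually means the Luhn algorithm: a digit string that fails the Luhn check is almost certainly not a real card number, which sharply cuts false positives when scanning outbound traffic. A minimal sketch in Python:

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum test for candidate credit card numbers."""
    digits = [int(c) for c in number if c.isdigit()]
    if len(digits) < 13:          # shorter than any real card number
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:            # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9            # same as summing the two digits of d
        total += d
    return total % 10 == 0
```

A DLP filter would run this on every 13-to-19-digit match found by a regex, and only flag strings that pass.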
There are 3 main ways data can exit via the Web. The first is through web-based email. It has always posed a security risk as well as offered a convenient way to circumvent corporate email filtering. While the inclusion of AV scanning by most free services has dramatically reduced the security issues, the issue of data leakage has only grown worse. The adoption of DLP for corporate email has proven that email is the number one way data used to leak. However, as corporate email systems have been enhanced, webmail has become the next easiest way to bypass scanning. Any DLP solution being considered will need to be able to open up SSL webmail and inspect it with the dynamic engines and techniques mentioned above. Be wary of latency with any solution claiming this capability, as decrypting SSL is not trivial and can induce a great deal of latency.
The second method of data leakage is through Instant Messaging. IM has also added AV scanning to most file transfers but, like webmail, it too is an easy way to leak information. The issue with IM has always been keeping up with the clients' protocols. They constantly morph to avoid aggregator products as well as security filters. It is now possible, though, to block all but the approved client types and versions you wish to allow on your network. By minimizing the variations of clients, recording transmissions, and keyword-filtering messages, this medium can be managed productively. Many solutions offer this capability, but few are deployed due to the complex nature of the clients and their morphing abilities, as well as the need for hardware at all internet egress points. This is one area where cloud-based offerings really make sense, by reducing the need for gateway infrastructure. Just make sure the cloud offering you consider really doesn't need any additional hardware or software at the gateway, as many of the so-called SaaS vendors do.
The third way is through blogging. User-generated content is the fastest growing risk. It is the most recent way that content can remain in motion unfiltered. This includes all of the social networking sites like Facebook, Web 2.0 sites like Wikipedia, and blogging sites like Blogger. Anywhere a user is able to upload content through the web, it will need to be inspected. There are several classic examples of corporate employees using corporate assets to undermine the company by posting material that is either confidential or damaging, incidents that could have been stopped through the use of some type of Web 2.0 controls.
By knowing the information leak channels, seeing what can be monitored, and finally understanding the technology architecture needed to keep up, you will be able to create policies for how data is really leaving your corporate assets. Surfing has its place in the web world, but not with your confidential data!