Wednesday, November 15, 2006

Simple security design review steps

Know the application
Every computer application, no matter how complex, consists of components that fall into the following categories:
a) Processes
b) Data channels
c) Data stores
d) Interactors
[Readers familiar with DFDs, or data flow diagrams, will instantly recognise these categories.] Start out by decomposing the system into smaller components and creating DFDs of them. Creating the DFDs will help the reviewer in the following ways:
  1. Will give you a deeper understanding of the system.
  2. Will force application teams to think about their application and stimulate security discussions even before the security review begins!
Create the DFDs until you (the reviewer) have a fairly good understanding of the system. I would recommend that you draw these diagrams 'online' with the app teams. This allows the app team to correct your thinking as you move on. Don't strive for 100% familiarity with the application.

Identify Threats
Behavioural Tips
Do not adopt an extremist stance when identifying threats. Do not give the app teams the impression that you are policing and judging their application. Remember, your goal is to prise out as much as possible from the app team. They are your best source of threats! Don't start out with "Do you realize that your application can be attacked in this manner?". Instead say something like, "Let's try and find out how the application responds to this threat.".

Technical Tips
I have come to realize that no matter what the threat, it will always lie in one or more of the following categories:
  1. Loss of confidentiality - Data ends up getting read by someone other than the intended user.
  2. Loss of integrity - Data has been tampered with and the system (or user) can no longer trust the data.
  3. Loss of availability - The application has gone for a toss and is no longer providing the expected quality of service (a sluggish or totally down system).
  4. Damage to the contract between a user and his credentials - This manifests in the form of repudiation (a user denying having carried out certain operations) or impersonation (a user masquerading as someone else or, worse still, getting through without authenticating to the system).
Use these categories to identify threats in the system.


Friday, October 27, 2006

Security considerations for user authorization in applications

  1. There exists a clearly defined matrix that defines access permissions on resources for different user roles and processes.
  2. Access to sensitive and system resources is restricted to selected roles only.
  3. Authorization is based on Windows authentication
  4. The system uses ACLs (in addition to other) techniques for authorization.
  5. The system uses application-level, (in addition to other) techniques for authorization.
  6. The system uses code access level (in addition to other) techniques for authorization. (e.g. CAS in .NET)
  7. The authorization mechanism of the application survives organizational changes and movements, e.g. Richard is in one role today and another tomorrow; in tomorrow's role he is less privileged than in today's.
  8. Identity information is passed without specific protection from one part of the application to the other, only if the sender and the receiver reside in trusted zones.
  9. Identity information is passed by signing/encrypting from one part of the application to the other, if the sender and the receiver reside in non-trusted zones.
  10. Authorization has been performed in the UI tier such that controls are hidden/shown based on user authorization. If this is true, check the following: the logic for the above is consolidated in a single section of the design documents, and where the behaviour of a control is consistent across pages, a repeater control is used.
  11. The MVC pattern is used for the application.
  12. If the number of roles is small and/or the pages for each role differ to a large extent, then separate UIs should be created.
  13. Authorization has been performed in the UI tier such that flow of UI is changed based on type of user
  14. Authorization has been performed in the UI tier such that access to the entry page of the app is configured for authorized users only.
  15. Authorization has been performed in the UI tier such that authorization check is carried out each time a new page is loaded.
  16. Authorization has been performed at the business logic.
  17. Authorization has been performed at the database level.
  18. Access to database tables and other entities is being controlled using SQL Data Definition Language such as GRANT, DENY and REVOKE.
  19. The database rules are being used to enforce security and authorization.
  20. The architecture ensures that no more data is read than is required to be displayed to the user.
  21. All operations that perform a business process are authorized.
  22. Authorization functionality is encapsulated as utility classes. (This obviates the need for non-security-minded programmers to learn too much about authorization; see the sketch after this list.)
  23. Guards are present to ensure that sensitive data cannot propagate outside the data tier.
  24. "If the authorization logic is simple, checks are being made in stored procedures.
  25. If the authorization logic is complex, checks are being made in data access logic components and call SPs."
  26. If an authorization cache is being used then it is protected by encryption (for confidentiality) and is signed (to prevent tampering)
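As a minimal, hypothetical sketch of item 22 (the class, role and operation names are illustrative, not part of any prescribed design): a single utility class holds the role/operation matrix of item 1, and every business operation calls its guard instead of hand-rolling checks.

#include <map>
#include <set>
#include <string>
#include <stdexcept>

enum Role { ROLE_CLERK, ROLE_MANAGER, ROLE_ADMIN };

class AuthorizationService {
public:
    // The role/operation matrix (item 1) is loaded once, e.g. from configuration.
    void Allow(Role role, const std::string& operation) {
        permissions_[role].insert(operation);
    }

    bool IsAllowed(Role role, const std::string& operation) const {
        std::map<Role, std::set<std::string> >::const_iterator it = permissions_.find(role);
        return it != permissions_.end() && it->second.count(operation) > 0;
    }

    // Business components call this guard before performing an operation (item 21).
    void Demand(Role role, const std::string& operation) const {
        if (!IsAllowed(role, operation))
            throw std::runtime_error("access denied");   // minimal information, per the logging guidance
    }

private:
    std::map<Role, std::set<std::string> > permissions_;
};

Typical usage would be auth.Demand(currentRole, "ApproveLoan"); at the top of the business method, so the check cannot be forgotten or re-implemented inconsistently.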

Security considerations for session management in applications

  1. Authentication cookies are protected in transit by using SSL
  2. The contents of authentication cookies are encrypted.
  3. A session timeout has been factored in the design of the application
  4. Session ids generated for tracking sessions should not be guessable numbers (e.g. the first user who visits the site gets session no. 1, the second user gets 2, and so on). See the sketch after this list.
  5. Session ids are not reused for a long cycle.
  6. Design supports an elaborate mitigation for session hijacking attacks.
  7. The session token value changes at authentication: one value is used before authentication and a different value after.
  8. The system does not rely too much on persistent cookies
  9. Guards are present for confidentiality and integrity of cookies.
  10. Are sensitive cookies marked as "secure"?
  11. Does the application rely on IP filtering for security?
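As a rough illustration of item 4, here is a hypothetical, Windows-specific sketch (error handling trimmed) that builds a session id from a cryptographic random number generator rather than a counter:

#include <windows.h>
#include <wincrypt.h>
#include <string>

std::string GenerateSessionId()
{
    unsigned char bytes[16];                        // 128 bits of randomness
    HCRYPTPROV hProv = 0;
    if (!CryptAcquireContext(&hProv, NULL, NULL, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT))
        return "";
    if (!CryptGenRandom(hProv, sizeof(bytes), bytes)) {
        CryptReleaseContext(hProv, 0);
        return "";
    }
    CryptReleaseContext(hProv, 0);

    static const char hex[] = "0123456789abcdef";   // hex-encode for use in a cookie
    std::string id;
    for (size_t i = 0; i < sizeof(bytes); ++i) {
        id += hex[bytes[i] >> 4];
        id += hex[bytes[i] & 0x0F];
    }
    return id;
}

Issuing a fresh id of this kind at authentication time (as in item 7) also blunts session fixation.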

Security considerations when auditing and logging in applications

  1. The design has a standardized approach to exception handling across the application.
  2. In the case of an exception, the minimum amount of information is returned to the user.
  3. Lowest-level exceptions are wrapped into a relevant exception for the benefit of the tiers above. (e.g. instead of revealing that a certain row/column of the database could not be accessed, it is better to return a plain "access denied")
  4. Where user actions are being logged, private data should not be written to the log. (e.g. changed passwords, critical settings etc.)
  5. The key parameters to be logged and audited have been identified.
  6. The application has levels of auditing and logging.
  7. Application logs have been protected from tampering.
  8. Application logs have been protected from unauthorized access.
  9. Utilities have been factored in for interpretation of log files.

Security considerations when handling sensitive data

  1. All data, along with its criticality, has been identified.
  2. A matrix indicating the data and the means to secure it is available in the design document.
  3. All data required to be kept confidential is encrypted.
  4. All data that absolutely must not be tampered with is digitally signed using a private key or protected using an HMAC.
  5. Private information (secrets) is not persisted to disk unless necessary.
  6. Private information (secrets) is kept in memory only for as long as necessary.
  7. Private information (secrets) are not stored as literal values in code (no hard-coded values)
  8. Database connection information such as user name and password are not stored in plaintext on disk.
  9. No sensitive data is stored in persisted cookies.
  10. No custom-built algorithms are being used to encrypt data.
  11. If standard algorithms are being used, then the library used to implement them is tested sufficiently. (Watch out for algorithms like AES, RSA etc. implemented by local app teams.)
  12. If standard algos are being used, there is a documented rationale for chosen key sizes.
  13. The encryption key is stored in a secure manner.
  14. There is a provision in the application for changing the keys used for encryption.
  15. There is a provision in the application for rolling over data (i.e. data encrypted with "old" key to be re-encrypted using "new" key)
  16. There is a provision in the application for handling scenarios like "lost key", "lost password to key" and "key compromise"
  17. Sensitive data is not transmitted using GET, as it can be directly seen in the browser address bar.
  18. Sensitive information is not being sent in the HTTP headers as these can be easily changed.
  19. Audit logs are encrypted (if required), but are definitely protected against tampering and loss of integrity.
  20. Where possible, WS-Security or SSL/TLS is used to protect ephemeral data rather than implementing crypto schemes from primitives.
  21. For .NET code, System.Security.Cryptography namespace will be used.
  22. For Java code, JCE providers will be used (Bouncycastle or SunOne)
  23. For C/C++ code CryptoAPI (CSP) will be used
  24. For scripting, CAPICOM will be used
  25. For Windows kernel mode, a statically linked version of RSA32.LIB will be used.
  26. The crypto algorithm being used will be easily replaceable. Hard coding is not being done. It is easy to upgrade the algorithms in the future.
  27. If the user of the software does not specify it, strong algorithms are used by default. The software will not "automatically" fall back to weak crypto.
  28. If symmetric key-based cryptography will be used, then the CBC mode will be used.
  29. If hash functions will be used, then the SHA-2 family of hash functions (SHA-256, SHA-384 or SHA-512) will be used.
  30. The application will use DPAPI (Data Protection API) to store secure data and passwords.
  31. If the application requires random numbers, a strong quality random number generator will be used.
  32. If a secret key is required to be generated from a user password, the user password should not be merely hashed. Instead a KDF (key derivation function) should be used to derive a key from the password. (e.g. CryptDeriveKey on Windows; see the sketch after this list)
  33. Are Cache-Control: no-cache or Cache-Control: no-store headers used to prevent caching in the browser?
  34. The application caters for the case where 128-bit SSL is not supported by the web browser.
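A hypothetical sketch of item 32 using CryptoAPI on Windows (error handling trimmed; the function name and the choice of CALG_AES_128 are illustrative, and the provider must support AES, e.g. PROV_RSA_AES). A real design would also mix a per-user salt into the hash before deriving the key.

#include <windows.h>
#include <wincrypt.h>
#include <string>

HCRYPTKEY DeriveKeyFromPassword(HCRYPTPROV hProv, const std::string& password)
{
    HCRYPTHASH hHash = 0;
    HCRYPTKEY  hKey  = 0;

    // Hash the password material first...
    if (!CryptCreateHash(hProv, CALG_SHA1, 0, 0, &hHash))
        return 0;
    if (!CryptHashData(hHash, (const BYTE*)password.data(), (DWORD)password.size(), 0)) {
        CryptDestroyHash(hHash);
        return 0;
    }

    // ...then let the CSP derive a symmetric key from it, instead of using the raw hash as the key.
    if (!CryptDeriveKey(hProv, CALG_AES_128, hHash, 0, &hKey))
        hKey = 0;

    CryptDestroyHash(hHash);
    return hKey;   // caller releases the key with CryptDestroyKey()
}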

Tuesday, October 24, 2006

Windows registry application security best practices

Use the following appsec best practices when dealing with the Windows registry.
  1. Use of the registry reduces application portability. Therefore, use it only if required.
  2. Don’t use the registry as a configuration trash–bin.
  3. Don’t store secrets in registry.
  4. Encrypt application data stored in the registry.
  5. Discourage users from directly editing the registry.
  6. Perform input validation on data read from and written to the registry.
  7. Don't write per-user data to HKLM. By default, non-administrator users have only read access to HKLM, so writing the data back will require the user to be logged on as an administrator.
  8. Don't open registry keys for FULL_CONTROL or ALL_ACCESS. (See the sketch after this list.)
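A minimal, hypothetical sketch of points 7 and 8 (the key path and value handling are illustrative): read application data from HKCU rather than HKLM, open the key with KEY_READ only, and validate what comes back (point 6).

#include <windows.h>
#include <string>

bool ReadSetting(const std::wstring& valueName, std::wstring& out)
{
    HKEY hKey = 0;
    if (RegOpenKeyExW(HKEY_CURRENT_USER, L"Software\\MyApp", 0, KEY_READ, &hKey) != ERROR_SUCCESS)
        return false;                                   // least privilege: read access only

    wchar_t buffer[256];
    DWORD   size = sizeof(buffer);
    DWORD   type = 0;
    LONG rc = RegQueryValueExW(hKey, valueName.c_str(), NULL, &type,
                               reinterpret_cast<BYTE*>(buffer), &size);
    RegCloseKey(hKey);

    if (rc != ERROR_SUCCESS || type != REG_SZ)          // validate the type before trusting the data
        return false;

    out.assign(buffer, size / sizeof(wchar_t));
    if (!out.empty() && out[out.size() - 1] == L'\0')   // drop the trailing NUL if present
        out.resize(out.size() - 1);
    return true;
}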

Tuesday, October 17, 2006

List of security regulations that applications must comply with

Applications often have to comply with the following regulations:
  1. International Standard - ISO 17799
  2. California AB 1950 and SB 1386 - Personal Information Privacy
  3. Children's Online Privacy Protection Act of 1998
  4. Director of Central Intelligence Directive series
  5. Regulation E - Electronic Fund Transfer
  6. General - EU Directive Applicability
  7. Federal Information Security Management Act (FISMA)
  8. The Gramm-Leach-Bliley Act (GLBA) - Act of 1999
  9. The Health Insurance Portability and Accountability Act (HIPAA) of 1996
  10. International Standard - ISO 27001
  11. Japan's Personal Information Protection Act
  12. MasterCard Site Data Protection Program (SDP)
  13. North American Electric Reliability Council (NERC) Critical Infrastructure Protection Committee (CIPC) Security Guidelines for the Electricity Sector
  14. OWASP 10 Most Critical Web Application Security Vulnerabilities
  15. Payment Card Industry Data Security Standard (PCI)
  16. Personal Information Protection and Electronic Documents Act (PIPED Act)
  17. The Privacy Act of 1974
  18. Safe Harbor
  19. SANS Top 20 Internet Security Vulnerabilities
  20. Securities Exchange Act of 1934
  21. Sarbanes-Oxley Act of 2002
  22. Title 21 Code of Federal Regulations (21 CFR Part 11) Electronic Records
  23. UK Data Protection Act 1998
  24. Visa Cardholder Information Security Program (CISP)
  25. WASC Web Security Threat Classification
  26. BASEL II

Thursday, October 12, 2006

Temporary Files Security In-depth

Introduction
Many applications need to create and maintain temporary files. Often these files are created without the end user knowing about them. Attacks realized through insecure temporary file management are a critical category of security attacks on software applications. Application developers are required to follow certain security best practices when creating temporary files. In this article I shall discuss these best practices.

Vulnerabilities due to poor tmp file implementations
Attack#1
victim.c:
char *filename = mktemp(template);   /* returns a "unique" name, but does not create the file */
fd = open(filename, …);              /* race window: the name may already have been claimed by now */

But an adversary can create a file with the same name between the two statements.
Then, victim.c will either end up opening the adversary’s file, or will fail to create the temporary file itself.

Attack#2 Symbolic Link Vulnerability
If the attacker knows where the application creates its temporary files and can guess the name of the next temporary file, the following attack can be realized:
- Attacker will put a symbolic link at the temporary file location.
- The attacker will link the symbolic link to a privileged file.
- Now, the application will unknowingly write to the privileged file instead of writing to the file in the temp directory.

Security Considerations when designing Temporary File modules
1. Avoid temporary files altogether
Temporary files often end up creating more problems than they solve. The effort (time/money) required to develop a temporary file management module often outweighs the value of the features that get added to the application.

2. Research the platform support for temporary files
Before starting out to code the temporary file generation module, assess the existing support for file generation on the target platform. For example, Windows has the GetTempPath() API call that provides the default temporary directory path.

3. Ensure file name uniqueness
The filename of the temporary file must be unique. This ensures that the application does not end up clobbering any existing data on the disk. If a file having the same name already exists on the disk, the logic of file name generation should generate (see next point) a new file name and use that instead.

4. Ensure file name randomness
When generating the file name of the temporary file ensure that the name is not guessable. Typically, the default APIs supplied by the operating system for generating temporary files create filenames containing monotonically increasing integers, so it becomes possible to predict the filename of the next temporary file that the application will generate. Use cryptography to generate unique file names. For example, the CryptGenRandom() function may be used to achieve this.

5. Ensure proper permissions for the temporary file
Ensure that the temporary files have the appropriate ACLs (access control lists) set on them. Avoid publicly writable temporary directories if possible. If using a publicly writable directory, create a directory within it for temporary files, with read and write permissions for the application only. Temporary files are often used to hold intermittent state information about the operation in progress and may contain confidential information.

6. Ensure secure cleanup of temporary files after usage
One of the most common attacks on applications that use temporary files is the recovery of previously deleted temporary files from the disk. This is trivially possible with the help of software available on the Internet. To mitigate this, shred temporary files, and ensure that the cleanup is commensurate with the sensitivity of the information contained in the temporary file and the security levels desired.

7. Prevent covert access
Sometimes the application's temporary files containing sensitive information may be indexed by the underlying Operating System service that may be active on the user system. For example, the indexing service on Windows, when active, silently builds an index of all files on disk. It is possible that sensitive information will end up getting indexed so that a malicious user may use the Search Files/Folders feature to obtain application intelligence.

8. Don't use dangerous functions for temporary file generation
Don't use mktemp(), tmpnam(), tempnam() or tmpfile() for generating temporary files. Also, don't reuse the filename template passed to mkstemp() in any other function call, such as stat(), chmod() etc., because the same name may by then refer to a different file; use the returned file descriptor instead.

9. Avoid storing very sensitive information in temporary files
As a rule, avoid storing sensitive information in temporary files. This is a very common reason for attacks on applications succeeding. An application may employ sophisticated security safeguards such as encryption for securing its data, yet in between encryption operations unwittingly write sensitive information to an unprotected temporary file. Avoid this at all costs.

10. Rely on absolute file paths and file handles
When building the file paths of the temporary file, use absolute paths. Do not use relative file paths. Also, if the directory path where the temporary files are being housed is being accepted from the user, ensure that it has been sanitised. To prevent time-of-check-to-time-of-use (TOCTOU) attacks, ensure that you use file handles (and not file paths) for subsequent access to the temporary files. Many a vulnerability has been attributed to applications using relative file paths for temporary file access.

11. Securely create temporary files
If open() is being used to create a temporary file use the O_CREAT|O_EXCL flags. If the Windows CreateFile() is being used to create a temporary file use the CREATE_NEW attribute. Calling the APIs with these flags ensures that these APIs fail if a file is already present with the same name in the temporary directory.
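A small, hypothetical sketch of the POSIX flavour of this advice (the path and function name are illustrative): mkstemp() both generates the name and opens the file with O_CREAT|O_EXCL semantics, which closes the race window of Attack #1 above.

#include <stdlib.h>
#include <sys/stat.h>

int create_temp_file(void)
{
    char template_name[] = "/tmp/myapp-XXXXXX";   /* illustrative location */
    int fd = mkstemp(template_name);              /* atomic: generate name and create file */
    if (fd == -1)
        return -1;

    /* Restrict permissions to the owner; some older mkstemp() implementations
       created the file with a looser mode. */
    fchmod(fd, S_IRUSR | S_IWUSR);

    /* From here on, use the file descriptor (not the name) for all further
       access, as recommended in point 10 above. */
    return fd;
}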

Conclusion
Temporary files are sometimes very useful to the application developer. However, improper implementation of temp file modules leads to several attacks on the application being realized. The application developer must use the techniques contained in this article to mitigate these security issues.

Thursday, October 05, 2006

When to consider cryptography in your application

Cryptography DOES provide the following services to applications:
a) Confidentiality
Prevents application data from being read and disclosed to anyone other than the intended recipient. This is achieved using encryption.

b) Authentication
Provides techniques by which the sender of a message (or originator of the data) can be authenticated reliably. This is achieved using message digesting and encryption.

c) Integrity
Provides a tamper-detection and tamper-evident technique for detecting if the message has been tampered since it was first generated. This is achieved using HMACs and digital signatures.

d) Non-repudiation
Provides conclusive proof that prevents an entity from denying that it carried out an operation. This too is achieved using digital signatures.

e) Replay protection
Provides techniques that can be used to prevent a previous message from being replayed to recreate the desired operation. This is done with a combination of message digesting and timestamps.

Cryptography DOES NOT provide the following services to applications:
a) Denial of service protection
Cryptography represents an operation carried out on data. It does not (and cannot) prevent DoS attacks.

b) Preventing Eavesdropping
c) Providing access control

Friday, September 29, 2006

AppSec Best Practice#2 - User Authentication using Passwords

In this post, I have described the accepted secure manner in which web applications must authenticate their users.

Using SSL in your applications - think again

We are used to the padlock that appears at the bottom of the browser when visiting sites having the https:// prefix. That's a visual cue to indicate that SSL is active for that web session. SSL is used by web application architects as a security mechanism to protect data. However, there are a few limitations to bear in mind when relying only on SSL for your application security needs.

1. SSL operates at the transport level not at the application level
This simply means that you should not rely on SSL to protect persistent data on your PC. SSL protects data only while it is on the wire between the client and the server; the data is decrypted after it has reached its destination. Consider this example - you enter your credit card number on a web page. Assuming SSL is active, the credit card number is encrypted before leaving your PC. The encrypted credit card number travels protected over the wire right up to the web server, at which point it is decrypted by SSL. Thereafter the data is in the clear. It is the responsibility of the web application to protect it after that point.

2. SSL protects either all the data or none at all
As mentioned in the previous point, SSL operates at the transport level. This means that applications do not get to decide what they wish to encrypt and what they don't. This can have performance impacts in some cases; I have found that a site that uses SSL throughout its pages tends to be slow. If SSL is active, all data is protected. Period.

3. SSL does not provide non-repudiation services
Non-repudiation means the ability to provide irrefutable evidence that a certain operation had been carried out. SSL does not, and cannot, provide that service.

4. SSL is not always effective for securing web services
Web services are not always front-ended by web servers. There often arises a scenario in which the application directly communicates with the web service (without intervention of the web server). In such cases, SSL does not help.

Monday, September 04, 2006

C/C++ CodeSec QuickTip#1 Memory Management

Applies to
Whenever some memory is being allocated using new, for example.

What to Check For
Ensure that delete will be called properly. Ensure that all exceptions are being caught for code following the new. Consider the following code:
int * myint = new int;
//some work - with no exception handling
delete myint;
What will happen if an exception occurs in the second line of code? The delete will not be called and a memory leak will exist.

Why
Although you may take great pains to match new and delete, the delete may end up not being called for very different reasons.

How to Check
1. Search for all locations in code where memory is being allocated.
2. Identify how the corresponding memory is being deallocated.

How to Fix
If ever you need to use new and delete, ensure that you new in the constructor and delete in the destructor. This is the most reliable way to guarantee that the memory will be freed.
If you cannot always do the new in the constructor, then ensure that there aren't any alternate code paths that skip the deallocation: for example, a change of logic in code that prevents the deallocation code from executing, or (as described above) an unhandled exception that causes the deallocation code to be skipped altogether.

Problem Example
int * myint = new int;
//some work - with no exception handling
delete myint;

Solution Example
int * myint = new int; //FIX: Move this allocation to the constructor
//some work - with no exception handling
delete myint; //FIX: Move this deallocation to the destructor
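For illustration, a minimal RAII-style sketch of the same fix (the class name is invented): the pointer is owned by an object whose destructor always runs, even if the intervening work throws.

class IntHolder {
public:
    IntHolder() : myint(new int(0)) {}     // new in the constructor
    ~IntHolder() { delete myint; }         // delete in the destructor
    int& value() { return *myint; }
private:
    int* myint;
    IntHolder(const IntHolder&);           // non-copyable: avoids a double delete
    IntHolder& operator=(const IntHolder&);
};

void do_work()
{
    IntHolder holder;
    holder.value() = 42;
    // ...work that may throw; no leak either way, because
    // ~IntHolder() runs when 'holder' goes out of scope.
}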

Thursday, August 31, 2006

Canonicalization Attacks

What are canonicalisation attacks?
Unauthorised access of files and directories on the web server machine by tampering with file/directory paths that a web site normally allows users to enter as part of its functionality. The attack is typically carried out by entering the path of the file in an input field on a web page or by supplying it as part of the URL.

What are the consequences?

Loss of confidentiality and integrity, and a denial of service if files are deleted.

What files can the attacker access?
Any file or folder on the disk(s) of the web server machine.

Defending applications against canonicalisation attacks
- Administrative Controls
a) Ensure that the web server hosts its content on a secure file system like NTFS.

b) Set ACLs (access control lists) on files and folders. This can be done by setting appropriate permissions on the [Security] tab of the [Properties] dialog of files and folders. Ensure that only administrators can access sensitive files and folders.

c) Do not keep sensitive files, source code or any such material on the web server machine.

d) Turn off the MS-DOS file name (8.3) convention on the machine by adding the following setting to the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem registry key: NtfsDisable8dot3NameCreation : REG_DWORD : 1.
Note that this option does not remove previously generated 8.3 filenames.

- Programming Controls
a) White-list directories that you would like your application to access rather than black-list them. (See the sketch after this list.)
BAD WAY (black-list):
string InputFilePath = GetPathFromUser();
if (InputFilePath == "Secret Directory")
    Output("Access Denied");
// everything not on the black-list slips through

CORRECT WAY (white-list):
string InputFilePath = GetPathFromUser();
if (InputFilePath.StartsWith("Application-accessible Directory"))
    // allow further operations...
else
    Output("Access Denied");

b) If ACLs have been set (Point b in Administrative Controls, above) then turn on Integrated Windows Authentication (in IIS) and impersonate using the WindowsIdentity class in your .NET code.

c) Filter the user input path by subjecting it to MapPath in .NET. MapPath(), according to MSDN, maps the virtual path in the requested URL to a physical path on the server. To prevent the path from mapping to a path in another application on the same server, set MapPath's third parameter to false.

d) Use regular expressions to control the files/folders that can be accessed. This can be implemented in a) above.

e) Reduce UTF-8 to its canonical form. UTF-8 text can be represented in multiple forms - guard against this.
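As a hypothetical illustration of control (a) combined with canonicalisation (the function name, the use of GetFullPathNameW and the allowed-root value are assumptions; note this sketch does not by itself resolve symbolic links or 8.3 short names):

#include <windows.h>
#include <wchar.h>
#include <string>

bool IsPathAllowed(const std::wstring& userPath, const std::wstring& allowedRoot)
{
    wchar_t full[MAX_PATH];
    // Resolve ".", ".." and relative segments to an absolute path first.
    DWORD len = GetFullPathNameW(userPath.c_str(), MAX_PATH, full, NULL);
    if (len == 0 || len >= MAX_PATH)
        return false;

    std::wstring canonical(full, len);
    // White-list: allow only paths under the application's permitted directory,
    // e.g. allowedRoot = L"C:\\Inetpub\\MyApp\\Uploads\\".
    return canonical.size() >= allowedRoot.size() &&
           _wcsnicmp(canonical.c_str(), allowedRoot.c_str(), allowedRoot.size()) == 0;
}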

Wednesday, August 30, 2006

HTTP Fingerprinting

What is HTTP Fingerprinting?
HTTP Fingerprinting is a technique that helps determine the following:
a) The web server software hosting the website
b) Version and other deployment details of the webserver

How does HTTP Fingerprinting help?
Depends on which side you are looking from. From a bonafide perspective, HTTP fingerprinting allows network administrators to profile the web servers in their environment and monitor patches. It also allows a pen-tester/security auditor to narrow down the list of attacks that the server must be subjected to in order to expose vulnerabilities.

Why is HTTP Fingerprinting possible?
Try this. Ask a programmer to implement a string comparison function. Provide the flowchart that details the logic. Now ask another programmer to do the same. Provide the same flowchart to this programmer too. You can be sure that the implementations, although accurate, would be dissimilar. The same goes for the way in which HTTP web servers are implemented. There are several vendors in the market today viz. Microsoft, Apache, Netscape and the list goes on. The web server implementations from each of these vendors have their own nuances and subtleties in which they implement the HTTP protocol. This, unfortunately, is the reason why HTTP Fingerprinting becomes possible!

How does one go about HTTP fingerprinting?
- Use banner grabbing
Try the following,
(i) run telnet IP_Address 80 at the command prompt. Substitute IP_Address with the IP address of the machine hosting the web server.
(ii) Type in the telnet window
HEAD / HTTP/1.0
(iii) Press Enter.
(iv) Press Enter again.
If all runs fine, what you should see is the web server banner! Feast on the information that you will see. You should be able to determine the following:
- The default home page configured for the site
- The last time the page was modified
- The web server running along with its version
- The time on the server
...and lots more.
Banner grabbing allows an attacker to get vital information about the web server software running on the box. It allows script kiddies (and determined hackers) to narrow down to the Achilles' heel of the website. The other things you can do are best left to your imagination!
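For those who prefer code to telnet, here is a rough Winsock sketch of the same HEAD request (link with ws2_32.lib; the target IP is a placeholder from the documentation range and error handling is minimal):

#include <winsock2.h>
#include <cstdio>
#include <cstring>

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(80);
    addr.sin_addr.s_addr = inet_addr("192.0.2.10");   // substitute the target web server's IP

    if (connect(s, (sockaddr*)&addr, sizeof(addr)) == 0) {
        const char request[] = "HEAD / HTTP/1.0\r\n\r\n";
        send(s, request, (int)strlen(request), 0);

        char buf[2048];
        int n = recv(s, buf, sizeof(buf) - 1, 0);     // the response headers, Server: included
        if (n > 0) {
            buf[n] = '\0';
            printf("%s\n", buf);
        }
    }
    closesocket(s);
    WSACleanup();
    return 0;
}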

- Difference in HTTP implementations
This involves subjecting the web server to different HTTP messages and observing the responses. These responses are then compared to expected responses from the corresponding web servers. Matches will indicate a correct recognition of the web server.
To illustrate this point, Microsoft IIS 6.0, when subjected to a HEAD / HTTP/1.0 request, emits a response in which the Server and Date headers are contiguous. The same is not seen for other web servers. More examples can be found at http://net-square.com/httprint/httprint_paper.html

How does one prevent HTTP fingerprinting?
- By changing the HTTP server banner string to something obscure or misleading

- Transposing the HTTP headers so as to remove any points of distinction

- Using custom error pages for HTTP error codes such as 404 or 500

- Using available HTTP server plug-ins that allow you to do some of the above

HTTP Fingerprinting Bottomline
HTTP fingerprinting remains the "entry point" for a user (whatever his/her intentions) and offers him/her a clear line-of-sight perspective of the target. HTTP fingerprinting also remains a necessary evil.

Wednesday, August 16, 2006

Encryption considerations in software applications

Encryption is the new buzzword that is often recommended as the panacea for most security ills. I do not disagree; however, there are a few caveats that you need to consider before using encryption:

1. How will the encryption keys be generated?
A primary requirement when generating cryptographic keys is that they should be random and unpredictable. Unfortunately, the problem with generating random numbers is that you have to supply a sufficiently random number as the seed in the first place - a classic chicken-and-egg situation. Having said this, several techniques (to be covered in a future post) exist to generate random numbers.

2. What encryption technique will your application use?
First things first. Hashing, message digesting and digital signatures DO NOT constitute encryption. Encryption means garbling of data with the help of a secret "encryption key" and ungarbling it using a secret (of course) "decryption key".
When encryption key == decryption key, this is called Symmetric Key encryption.
When encryption key != decryption key, this is called Asymmetric Key (or Public Key) encryption.
You need to decide first whether your application will use symmetric or asymmetric key encryption. Each has its pros and cons, but you need to decide which one to use. [Note: when you use asymmetric key encryption, you often end up using symmetric keys as well, but that's another story.]

3. What encryption algorithm will you use for encryption?
Algorithms are nothing but the sequence of steps to be carried out when encrypting data and decrypting (usually the reverse) previously encrypted data. Enough has been written about not using home-grown encryption algorithms, so let's assume that you are planning to use a standard and tested algorithm. Here's what I recommend -
Symmetric key encryption - Use AES (Rijndael - pronounced "Rhine doll")
Asymmetric key encryption - Use ECC, else use the prolific RSA algorithm.

4. What are the recommended key sizes to be used?
This is a function of the application's security requirements. If you are protecting low-worth data you can settle for smaller key sizes than when protecting high-worth data. But there are two important points to remember:
  1. Key sizes for symmetric and asymmetric algorithms vary greatly from each other.
  2. When using both symmetric and asymmetric algorithms together, ensure that the crack resistance provided by the combination is equivalent to or higher than that required by the application.

What is a security pattern?

Security patterns are nothing but established ways of implementing security features in applications. Let's try and understand why we need security patterns. More often than not, developers are confronted with a situation in which only the application features to be implemented are given to them, leaving them with the onerous task of implementing those features. Now there is nothing wrong with that, except that the security of the feature takes a back seat. Let's take a simple example. Say you have been asked to develop a user login module. This module accepts a user name and password from the user and authenticates the user against the password stored in the database. When the developer begins to code this feature he will naturally focus only on the functionality and the means to the end. He will care little (and it is not in his interest to) about the security best practices to be followed, both in design and coding. Clearly something is missing. Consider the following. What if...
the developer had a pre-existing security design that he could use for implementing a feature? A design that was resilient to the possible attacks on his module and that incorporated globally-accepted best practices.
he was not required to worry about the "hows", "whys" and "whats" of security for that feature?
Security patterns come in and fulfil that need. [Btw, this blog contains several such security patterns and best practices that you can use to develop more secure applications. Have fun.]

Friday, August 11, 2006

Application Database Security


In this post, I shall discuss the significance of database security and the security gotchas to consider. [I shall discuss the steps to take to mitigate these security risks in a separate post.]

Refer to the figure above. Note the following when viewing the figure:
a) The lower two machines viz. the bonafide client and the bonafide server represent the trustworthy systems.
b) The upper two machines represent bogus machines either physically or logically placed in your application environment.
c) Security considerations appear in red circles with numbers in them. viz. 1 through 6.

Security Considerations
(1) Database client subversion
(2) Database client impersonation (masquerade)
(3) Database server subversion
(4) Database server impersonation (masquerade)
(5) Vulnerabilities related to data flowing between the client and the server
(6) Vulnerabilities related to data stored on the database server

Thursday, August 10, 2006

AppSec Best Practice#1 - User Account Lockouts


User account lockouts should be designed with caution. Take a look at what you must do - a programmer's perspective. Note that this is agnostic of application type, viz. web, desktop etc.

Single Sign-On (SSO) and SAML (Security Assertion Markup Language)

What is SSO?
Single Sign-On (SSO) requires a user to authenticate himself to a service one time and does not require reauthentication for other services of the system linked by the SSO framework.

SSO addresses a common issue – that of requiring users to manage and remember authentication credentials (usually a username/password pair) for every service or application they have been subscribed to.

SSO requires that users need remember only one set of authentication credentials. This set of authentication credentials is “passed-on” to other SSO-enabled services (or applications) so that the user can use them transparently without having to reauthenticate.

How is SSO typically implemented
You can implement SSO using your own bespoke SSO logic implementation or you can use a standards-based technique. Either option has its own advantages and disadvantages.
The advantage of bespoke programming is that you have a lot of versatility and you are in the driver's seat when deciding the level of SSO you are looking at. There are disadvantages too, however. Chief among them are reliance on in-house expertise (which is often not available or insufficient) and lack of scalability, extensibility and performance.
On the other hand, if you adopt standards-based techniques you are assured of an industry-accepted solution, which augurs well for the reliability, scalability, extensibility and performance of the application. Disadvantages, typically, are associated with implementation teams having to learn "another new" standard and code to the specification, although one may counter-argue that implementations may already exist in the market.

Techniques used for SSO
Proxy-based SSO and SAML-based SSO are the two most common techniques used for SSO implementation. This article does not go into the details of proxy-based SSO.

Enter SAML
SAML is yet another acronym for you to remember. It stands for Security Assertion Markup Language. The ‘ML’ in the name gives away that SAML is XML based.
Here’s the single important reason why applications need SAML – SAML allows seamless inter-domain sharing of security information. This was not easy before SAML was created.

Thumb-rule for determining if my application requires SAML
This is answered by asking a simple question:
Does your application, either currently or in the near future, have a business need for offering a seamless user experience of service usage across business partners and other 3rd-party service providers?
If the answer to the above question is 'Yes', your application needs SAML.

Some advantages afforded by SAML?
1. Platform neutrality
SAML abstracts the security framework away from platform architectures and particular vendor implementations. This makes security more independent of application logic which is an important tenet of Service-Oriented Architecture.

2. Loose coupling of directories/databases
SAML does not require user information to be maintained and synchronized between directories/databases.

3. Improved online experience for end users
SAML enables single sign-on by allowing users to authenticate at an identity provider and then access service providers without additional authentication. In addition, identity federation (linking of multiple identities) with SAML allows for a better-customized user experience at each service while promoting privacy.

4. Reduced administrative costs for service providers
Using SAML to "reuse" a single act of authentication (such as logging in with a username and password) multiple times across multiple services can reduce the cost of maintaining account information. This burden is transferred to the identity provider.

5. Risk transference
SAML can act to push responsibility for proper management of identities to the identity provider, which is more often compatible with its business model.

OWASP, Mumbai Chapter - 2nd Meet - 31-July -06

I spoke on the Significance of Random Numbers in Application Security. I started off with the practical usage of random numbers. I explained how good random number generation prevents applications from malfunctioning and increases the strength of cryptographic operations by increasing the entropy associated with the key.
I went on to explain how random numbers automate otherwise manual tasks and how this increases the security of an application. Explaining the concepts of entropy and seeds, I discussed the levels that should be reached in an application. Finally, I spoke about the various sources of random numbers. I also showed developers the simple mathematics required to calculate minimum password lengths, given the security requirements.
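As a quick worked example of that last bit of math (the 64-bit target is illustrative): a password drawn from a 62-character alphabet (a-z, A-Z, 0-9) contributes log2(62) ≈ 5.95 bits of entropy per character, so to reach 64 bits of guessing resistance you need at least 64 / 5.95 ≈ 10.8, i.e. a minimum password length of 11 characters.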

You can find my presentation here.

OWASP, Mumbai Chapter - 1st Meet - 24-June-06

I presented on Secure Coding Fundamentals and elucidated the cost factor incurred due to insecure code, resulting in network cost, productivity cost and so on. Further explaining the basic threats to code, I explained how mistakes made by programmers - in I/O, API Abuse, Environment & Configuration, and Time & State - were responsible for security flaws in an application. Moving ahead, I laid down a few principles to be followed as Secure Coding - general guidelines for all languages and specific secure coding guidelines for C & C++, Java and .NET.

You can get my presentation here.

Memory Allocation Best Practices in C and C++

Introduction

Tomes (and I'm talking of real big tomes) are available on secure coding in C and C++. They describe the details of the language, why C,C++ are so insecure and coding patterns and anti-patterns. They tell you what to chew and what to eschew. At the end of it all - when you come down to writing code - how many of these best practices do you remember?

The answer to the above question is best left to your judgement. In this secure programming series, I intend to bring before you collections of programming best practices collected from the following sources:
1. My own experience and the invaluable experience that I have obtained when reviewing source code.
2. Numerous books available on the topic (my favourite being Secure Coding in C and C++ by Robert Seacord). I recently picked up Exceptional C++ and More Exceptional C++ by Herb Sutter and wonder how I did without these ones!

This article gives you tips to follow when allocating and deallocating memory in C and C++. If your code does not follow them, then you run the risk of making your programs susceptible to all types of attacks (describing the attacks does not fall within the scope of this article).

Without wasting any more of your time (or mine) let us dig in.

Secure Memory Allocation Tips Common to C and C++

Tip 1 - Use static (fixed-size, automatic) buffers wherever possible; such memory is released automatically and does not need to be freed manually.

Tip 2 - Previously allocated memory should be manually freed after it is no longer required.
Don't laugh; meet someone who's making the switch from Java to C/C++ and you'll know what I'm talking about.

Tip 3 - Given an option to choose between calloc/malloc or new to allocate memory, go in for the latter - use new, don't use calloc/malloc.

Tip 4 - When using C and C++ code together, if new has been used to allocate memory, use delete to free it. Do not use free. Likewise, if malloc or calloc has been used to allocate memory, use free when deallocating. Do not use delete.
Unfortunately, many programmers feel they can get away with using free when the allocation has been done by new (and vice versa) because they discovered while debugging that new was implemented using malloc and that delete was implemented using free! Don't fall into this trap.

Tip 5 - Often a function requires to set a buffer supplied by the caller. The length of the buffer may be unknown to the caller so the caller may not know how much memory to allocate before supplying that buffer to the function. In such cases the function should provide a means for the caller to determine how many bytes are required to be allocated.
A common way to do this is by allowing the caller to call the function with a special argument so that it will return the number of bytes the caller must allocate for the buffer.
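A minimal sketch of the pattern described in Tip 5 (the function, buffer and device names are invented for illustration): calling with a NULL buffer returns the required size, and a second call fills the buffer.

#include <string.h>
#include <stddef.h>

// Returns the number of bytes required when 'buffer' is NULL; otherwise copies
// up to 'size' bytes into 'buffer' and returns the number of bytes written.
size_t GetDeviceName(char* buffer, size_t size)
{
    static const char name[] = "ACME-Widget-3000";
    const size_t needed = sizeof(name);          // includes the terminating NUL
    if (buffer == NULL)
        return needed;                           // first call: report the required size
    if (size < needed)
        return 0;                                // caller's buffer is too small
    memcpy(buffer, name, needed);
    return needed;
}

// Typical two-call usage:
//   size_t n = GetDeviceName(NULL, 0);
//   char* buf = new char[n];
//   GetDeviceName(buf, n);
//   ...
//   delete [] buf;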

Tip 6 - When shipping code libraries (or SDKs as they are called) provide wrapper functions that encapsulate new and delete. This helps prevent problems when the library and the calling application are linked against different C/C++ runtimes (for example single-threaded vs multi-threaded), in which case memory allocated on one heap cannot safely be freed on the other.

Tip 7 - Use unsigned integer types to hold the number of bytes to be allocated, when allocating memory dynamically. This weeds out negative numbers. Also check the length of memory allocated against a maximum value.

Tip 8 - Do not allocate and deallocate memory in a loop as this may slow down the program and may sometimes cause security malfunctions.

Tip 9 - Assign NULL to a pointer after freeing (or deleting) it. This prevents the program from crashing should the pointer be accidentally freed again. Calling free or delete on NULL pointers is guaranteed not to cause a problem.

Tip 10 - Compilers are known to vaporise calls to memset() that appear after all modifications to the memory location are complete for that flow. Use SecureZeroMemory() to prevent this from happening.

Tip 11 - When storing secrets such as passwords in memory, overwrite them with random data before deleting them. Note that free and delete merely return previously allocated memory to the heap; they don't really 'delete' the data contained in that memory.
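A small sketch combining Tips 10 and 11 (Windows-specific; the function is illustrative). On other platforms, a volatile-qualified wipe loop serves the same purpose as SecureZeroMemory().

#include <windows.h>
#include <string.h>

void UsePassword(const char* password)
{
    size_t len = strlen(password) + 1;
    char* copy = new char[len];
    memcpy(copy, password, len);

    // ... authenticate using 'copy' ...

    SecureZeroMemory(copy, len);   // a plain memset() here could be optimised away
    delete [] copy;                // delete does not erase the bytes by itself
}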

Tip 12 - An easy way to find out if your code is leaking memory is by executing it and examining its memory usage either using Task Manager on Windows or top on Linux.

Secure Memory Allocation Tips in C

Tip 1 - Ensure that 0 (zero) bytes are not allocated using malloc. According to the documentation, the behaviour of malloc() in this case is implementation-defined.

Tip 2 - Always check the pointer returned by calloc/malloc. If this pointer turns out to be NULL, the memory allocation should be considered unsuccessful and no operations should be performed using that pointer.

Tip 3 - When an array of individually allocated objects has been built (e.g. an array of pointers, each pointing to its own malloc'd block), remember to free each element in a loop before freeing the array itself.

Tip 4 - Do not use realloc when allocating buffers that will store sensitive data. The implementation of realloc may copy and move the data around as you reallocate. This implies that your sensitive data can end up in several other areas of memory which you would have no means of "scrubbing".

Secure Memory Allocation Tips in C++

Tip 1 - When allocating collections use
std::vector<thing> vt(100, thing());
rather than
thing* pt = new thing[100];
The vector object defined above has automatic (stack) storage, so its destructor releases the element storage automatically. If the storage needs a longer lifetime, say as part of a larger class instance, then make it a member variable and initialize the storage with assign() when required.

Tip 2 - When using new to allocate an array of objects, use the delete [ ] convention when freeing the memory. Using delete without the subscript operator [ ] results in undefined behaviour (at best a memory leak).

Tip 3 - Use auto_ptr more often than you currently do when allocating so that deallocation is handled automatically. Remember the following guidelines when dealing with auto_ptrs.

  • An existing non-const auto_ptr can be reassigned to own a different object by using its reset() function.
  • The auto_ptr::reset() function deletes the existing owned object before owning the new one.
  • Only one auto_ptr can own an object. So after one auto_ptr (say, P1) has been assigned to another auto_ptr (say, P2) do not use P1 any longer to call a method on the object as P1 is reset to NULL. Remember that the copy of an auto_ptr is not equivalent to the original.
  • Do not put auto_ptrs into standard containers. This is because doing this creates a copy of the auto_ptr and as mentioned above the copy of an auto_ptr is not equivalent to the original.
  • Dereferencing an auto_ptr is the only allowed operation on a const auto_ptr.
  • auto_ptr cannot be used to manage arrays.

Tip 4 - When using new, enclose it within a try-catch block. On failure, the new operator throws a std::bad_alloc exception rather than returning a null pointer. To force the new operator to return a null pointer on failure instead, use the nothrow qualifier as shown below:
thing * pt = new (std::nothrow) thing[100];

Finally...

I hope you enjoyed reading these tips. If you did, please vote and rate this article below. I shall wait for your comments and feedback. I will collate comments from all of you and update the article - not to mention - and give you all credits. Please feel free to write me at richiehere @ hotmail . com. Good luck and secure programming!

Securing user enrollments in applications

What is User Registration?
User registration simply means introducing intended users to the software for the first time. User registration is typically a one-time operation. After registration, users start using the services of the software. User registration should be given more attention when designing your applications.

Why is User Registration Important?
User registration is important because it...
1. provides assurance that only bonafide users are added to the system.
2. provides accountability by reducing chances of backdoor entry into the system.
3. allows trust to be transferred from the software to the intended users of the software.

What kind of applications require User Registration?
Some examples of applications requiring User Registration are:
a) A customer-service portal for a telephone company
b) An online banking website for a bank
c) An extranet website hosted by a company

What security problems are caused due to poor User Registration?
Some problems associated with poor user registration are:
a) Introduction of ghost users in the system
b) Easy subversion of the user creation logistics
c) Confusing forensic paths which make it difficult to pin down hacking attempts to a process employed by the system

User registration consists of three distinct parts:
1. Create User - Creating users at the software console
2. Link User - Associate physical people with the created users
3. Transfer Control - Handing over initial credentials to the linked physical users.