Category Archives: webapps security

Web Security: Chrome cookies & security updates.

Google to kill third-party Chrome cookies in two years

Google doesn’t want to block third-party cookies in Chrome right now. It has promised to make them obsolete later, though. Wait – what?

The search engine giant gave us the latest update this week on the journey towards what it says will be a more private, equitable web. It announced this initiative, known as the Privacy Sandbox, in August 2019, saying it wants to make the web more private for users.

The discussion about online ads and privacy revolves around cookies because they’re what support many predatory advertising models today. Cookies work like this: you visit a website and it puts a small file on your hard drive. This cookie contains information about the session – when you visited, what you looked at, what IP address you came from, and so on.

Some companies use these purely to remember you when you go back so that you don’t have to sign in again. Those are first-party cookies, and they’re a great way to make the web more convenient.

Google Chrome to start blocking downloads served via HTTP

Google has announced a timetable for phasing out insecure file downloads in the Chrome browser, starting with desktop version 81 due out next month. Known in jargon as ‘mixed content downloads’, these are files such as software executables, documents and media files offered from secure HTTPS websites over insecure HTTP connections.

This is a worry because a user seeing the HTTPS padlock on a site visited using Chrome might assume that any downloads it offers are also secure (HTTP sites offering downloads are already marked ‘not secure’).

How to secure your web content?

In web application security, apart from protecting user information such as credentials, personal details and payment data, it is very important to take care of user content, whether it is user-specific personalized sensitive content or content that is shared with third-party services.

The following are must-read topics for putting some level of security into your web content:

Properly configuring server MIME types

There are several ways incorrect MIME types can cause potential security problems with your site. This article explains some of those and shows how to configure your server to serve files with the correct MIME types.
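As a rough illustration, an Apache configuration (in httpd.conf or .htaccess, assuming mod_mime and mod_headers are enabled; the extensions are just examples) might look like this:

# Map file extensions to the correct MIME types
AddType application/javascript .js
AddType image/svg+xml .svg

# Ask browsers not to second-guess the declared type
Header always set X-Content-Type-Options "nosniff"

The nosniff header stops MIME sniffing, where a browser reinterprets a file as a different, potentially executable type.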

HTTP Strict Transport Security

The Strict-Transport-Security HTTP header lets a website specify that it may only be accessed using HTTPS.
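For example, with Apache’s mod_headers enabled, a site might send the header like this (the one-year max-age is a common choice, not a requirement):

# Force HTTPS for one year, including all subdomains
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"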

HTTP access control

The Cross-Origin Resource Sharing standard provides a way to specify what content may be loaded from other domains. You can use this to prevent your site from being used improperly; in addition, you can use it to establish resources that other sites are expressly permitted to use.
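A minimal Apache sketch, assuming mod_headers is enabled and https://app.example.com is a placeholder for the single origin you trust:

# Allow only one trusted origin to read responses cross-origin
Header set Access-Control-Allow-Origin "https://app.example.com"
# Make caches key the response on the requesting origin
Header merge Vary "Origin"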

Content Security Policy

An added layer of security that helps to detect and mitigate certain types of attacks, including cross-site scripting (XSS) and data injection attacks. These attacks are used for everything from data theft to site defacement and malware distribution. In an XSS attack, malicious code executes in the victim’s browser, letting the attacker bypass access controls and impersonate users. According to the Open Web Application Security Project, XSS was the seventh most common web app vulnerability in 2017.
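A deliberately conservative starting policy, set via the Content-Security-Policy response header in Apache (the right policy depends entirely on what your pages legitimately load):

# Allow scripts, styles and other resources only from our own origin
Header always set Content-Security-Policy "default-src 'self'"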

The X-Frame-Options response header

The X-Frame-Options HTTP response header can be used to indicate whether or not a browser should be allowed to render a page in a <frame>. Sites can use this to avoid clickjacking attacks by ensuring that their content is not embedded into other sites.
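For example, with mod_headers enabled:

# Allow framing only by pages from the same origin (use DENY to forbid framing entirely)
Header always set X-Frame-Options "SAMEORIGIN"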

Securing Your Site Using .htaccess

The .htaccess file is one of the best ways to secure your site. You can blacklist IPs, restrict access to certain areas of the website, protect individual files, protect against image hotlinking, and a lot more.
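A small .htaccess sketch covering three of these ideas, assuming Apache 2.4 with mod_authz_core and mod_rewrite and a server that allows these overrides; the IP address and domain are placeholders:

# Blacklist a known-bad IP address
<RequireAll>
    Require all granted
    Require not ip 203.0.113.5
</RequireAll>

# Deny direct access to sensitive files such as .htaccess itself
<Files ".ht*">
    Require all denied
</Files>

# Protect against image hotlinking from other sites
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
RewriteRule \.(jpe?g|png|gif)$ - [F]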

Web Security: What security vulnerability is in Apache Server Side Includes?

Let’s consider a web application that has a common copyright notice and footer links which the developer wants displayed on every page. One solution is to add the copyright and footer links to every page manually; another is to use a server side include.

What is Server side include?

Server side include is a web server feature that allows developers to dynamically generate web content using “#” directives, without having to do it manually. The server searches the HTML code for SSI directives and executes them sequentially. These directives may reference shell commands, files, or CGI variables that are replaced with their values. After executing all the directives, the final HTML is served to the requestor. This saves a lot of time and makes the code more readable and easier to maintain. The following is a sample directive:
<!--#include virtual="/footer.html" -->
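Two further illustrative directives, one echoing a CGI variable and one running a shell command and embedding its output (the variable and command are examples only):

<!--#echo var="DATE_LOCAL" -->
<!--#exec cmd="uptime" -->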

Server Side Include (SSI) Security Vulnerability

Server-Side Include (SSI) injection vulnerabilities arise when an application incorporates user-controllable data into a response that is then parsed for Server-Side Include directives. If the data is not strictly validated, an attacker can modify or inject directives to carry out malicious actions.

SSI injection is a form of attack that attackers can use to compromise web applications that process SSI directives. Such applications often accept user input and render it in their pages. An attacker takes advantage of this functionality by injecting a malicious SSI directive as input. As a result, the attacker can add, alter or delete files on the server, execute shell commands, and even gain access to sensitive files like “/etc/passwd”.
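For instance, if a page echoes a user-supplied value back into SSI-processed HTML, the attacker can submit a directive instead of ordinary text (a classic illustration, not tied to any particular application):

<!--#exec cmd="cat /etc/passwd" -->

When the server parses the page, it executes the command and returns the file’s contents in the response.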

Security Remediation

If possible, applications should avoid incorporating user-controllable data into pages that are processed for SSI directives. In almost every situation, there are safer alternative methods of implementing the required functionality. If this is not considered feasible, then the data should be strictly validated. Ideally, a whitelist of specific accepted values should be used. Otherwise, only short alphanumeric strings should be accepted. Input containing any other data, including any conceivable SSI metacharacter, should be rejected.
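On Apache, you can also limit the damage at the server level by permitting includes while disabling command execution; this is defence in depth, not a substitute for input validation:

# Allow SSI directives but forbid #exec and #include of CGI output
Options +IncludesNOEXEC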

WebSecurity: Why does an SSL cookie need the secure flag?

Overview

In web applications, we generally think that secure HTTP (i.e. HTTPS) is good enough, but the truth is that application security is as important as network and protocol security. Let’s understand why an SSL cookie needs the secure flag in your application.

The SSL cookie problem without the secure flag

If the secure flag is set on a cookie, then browsers will not submit the cookie in any requests that use an unencrypted HTTP connection, thereby preventing the cookie from being trivially intercepted by an attacker monitoring network traffic. If the secure flag is not set, then the cookie will be transmitted in clear-text if the user visits any HTTP URLs within the cookie’s scope.

An attacker may be able to induce this event by feeding a user suitable links, either directly or via another web site. Even if the domain that issued the cookie does not host any content that is accessed over HTTP, an attacker may be able to use links of the form http://example.com:443/ to perform the same attack.

How can a hacker exploit the cookie secure flag problem?

To exploit this vulnerability, an attacker must be suitably positioned to eavesdrop on the victim’s network traffic. This scenario typically occurs when a client communicates with the server over an insecure connection such as public Wi-Fi, or a corporate or home network that is shared with a compromised computer.

Common defenses such as switched networks are not sufficient to prevent this. An attacker situated in the user’s ISP or the application’s hosting infrastructure could also perform this attack. Note that an advanced adversary could potentially target any connection made over the Internet’s core infrastructure.

Solution

The secure flag should be set on all cookies that are used for transmitting sensitive data when accessing content over HTTPS. If cookies are used to transmit session tokens, then areas of the application that are accessed over HTTPS should employ their own session handling mechanism, and the session tokens used should never be transmitted over unencrypted communications.
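Ideally the application framework sets the flag itself. As a server-level stopgap, Apache’s mod_headers can append the attributes to every cookie the application sets (assuming existing cookies do not already carry conflicting attributes):

# Append Secure and HttpOnly to every Set-Cookie response header
Header edit Set-Cookie ^(.*)$ "$1; Secure; HttpOnly"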

WebSecurity: Possible security issues in the robots.txt file

Purpose of the Robots.txt File

The file robots.txt is used to give instructions to web robots, such as search engine crawlers, about locations within the web site that robots are allowed, or not allowed, to crawl and index.

Security Importance of the Robots.txt File

The presence of the robots.txt does not in itself present any kind of security vulnerability. However, it is often used to identify restricted or private areas of a site’s contents. The information in the file may therefore help an attacker to map out the site’s contents, especially if some of the locations identified are not linked from elsewhere in the site.
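For example, a robots.txt like the following (the paths are invented for illustration) tells a well-behaved crawler to stay away, but tells an attacker exactly where to look:

User-agent: *
Disallow: /admin/
Disallow: /backup/
Disallow: /internal-reports/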

If the application relies on robots.txt to protect access to these areas, and does not enforce proper access control over them, then this presents a serious vulnerability.

Possible Solution

The robots.txt file is not itself a security threat, and its correct use can represent good practice for non-security reasons. You should not assume that all web robots will honor the file’s instructions. Rather, assume that attackers will pay close attention to any locations identified in the file. Do not rely on robots.txt to provide any kind of protection over unauthorized access.
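Instead, enforce access control on the resource itself. A minimal sketch for the main Apache configuration (the path and network range are placeholders):

# Listing /admin/ in robots.txt hides nothing; restrict who may actually reach it
<Directory "/var/www/html/admin">
    Require ip 192.0.2.0/24
</Directory>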