Failure to restrict URL access


As with many other web application vulnerabilities, this one also aligns with access control rights. Applications use URL restrictions to prevent non-privileged users from accessing privileged data and resources. Every clickable button in a web application directs to a URL. A failure to restrict access vulnerability means that while clicking the button in the application would prevent access, entering the URL directly into the browser allows it. When an application fails to restrict URL access, malicious actors can use “forced browsing” to attack it.

For example, a web application might have a URL structure that looks like this:

www.insecurewebapp.com/failure… If the attackers know that the last item in that URL is the data type, they can try to guess the URL structure for a specific type of sensitive information.

www.insecurewebapp.com/failure… If the application has a failure to restrict URL access vulnerability, plugging that URL directly into the browser gives the attacker access.

Automate Linux Tasks with Tools


Let’s take a look at a few tools that can make life easier for Linux admins by automating their day-to-day tasks.

Puppet

Puppet is an open-source tool designed to make automation and reporting much easier for system administrators. It is essentially configuration management software that helps you configure and maintain your servers and other systems on your network. Server administrators typically spend a lot of time repeating the same tasks day after day, and have long wanted to automate them in order to free up time for other projects or for learning new concepts and scripting languages. Tasks can be automated by writing scripts, but in companies with larger networks, scripts alone don’t scale. This is where Puppet comes to the rescue, as with the help of Puppet one can:

  • Define unique configuration settings for every host on the network
  • Monitor the network continuously for any alterations
  • Create and manage users effectively
  • Manage the configuration settings of every open-source tool
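As a rough sketch, a Puppet manifest describing a host’s desired state might look like the following (the package name, service name, and module path are illustrative assumptions, not from the original article):

```puppet
# Keep NTP installed, configured, and running on every managed host.
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  source  => 'puppet:///modules/ntp/ntp.conf', # hypothetical module file
  require => Package['ntp'],
}

service { 'ntp':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/ntp.conf'], # restart when the config changes
}
```

Applying the same manifest across the fleet replaces the repetitive manual steps described above.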

Ansible

Ansible is an open-source configuration management and enterprise IT automation tool from Red Hat. Its playbooks are written in simple YAML, enabling system administrators to handle automation and configuration effectively. Ansible consists of a controlling machine and the nodes it manages; the nodes are controlled over SSH. One of the main features of Ansible is that no agents are deployed to the nodes – all communication happens through SSH. A low learning curve, consistency, high reliability, and security are other features that make Ansible stand out from the competition. One limitation of Ansible is that it does not handle bare-metal and virtual machine provisioning on its own.
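As a sketch of the idea, an Ansible playbook is plain YAML executed against the nodes over SSH; the host group and package here are illustrative assumptions:

```yaml
# site.yml – install and start nginx on every host in the "webservers" group.
- name: Configure web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Run with `ansible-playbook -i inventory site.yml`; no agent is needed on the nodes.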

Nagios

Nagios, now known as Nagios Core, is an open-source automation and monitoring tool for managing all the systems in your infrastructure. It also offers alerting services that notify system administrators when something looks suspicious on the network. With SNMP, system admins can also control and manage printers, routers, and switches through Nagios. Nagios lets you create an event handler that automatically restarts a faulty application and its services whenever they go down.
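The event handler mechanism mentioned above is wired up in the Nagios object configuration. A hedged sketch (the host name, service, template, and script path are illustrative):

```cfg
# Restart the web service automatically when its check fails.
define service {
    use                   generic-service   ; hypothetical service template
    host_name             web01
    service_description   HTTP
    check_command         check_http
    event_handler         restart-httpd
    event_handler_enabled 1
}

define command {
    command_name restart-httpd
    command_line /usr/local/nagios/libexec/eventhandlers/restart-httpd $SERVICESTATE$ $SERVICESTATETYPE$ $SERVICEATTEMPT$
}
```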

Preventing Cross-Site Scripting Attacks


Implementing HTTP security headers is an essential way to keep your site and your visitors safe from attacks and hackers. In a previous post, we dove into how the X-Frame-Options header and frame-ancestors directive can help combat clickjacking. In today’s post, we want to go more in-depth with the X-XSS-Protection header, as well as the newer CSP reflected-xss directive, and how they can help prevent cross-site scripting (XSS) attacks.

What is X-XSS Protection?

The x-xss-protection header is designed to enable the cross-site scripting (XSS) filter built into modern web browsers. The filter is usually enabled by default, but setting this header enforces it. It is supported by Internet Explorer 8+, Chrome, and Safari. The recommended configuration is to set this header to the following value, which enables the XSS protection and instructs the browser to block the response in the event that a malicious script has been inserted from user input, instead of sanitizing it.

x-xss-protection: 1; mode=block

Cross-site Scripting (XSS)

Cross-site scripting, also known as XSS, is basically a way to inject code that will perform actions in the user’s browser on behalf of a website. Sometimes this is seen by the user and sometimes it can go totally unnoticed in the background. There are many different types of XSS vulnerabilities, below are two of the most common.

Reflected XSS: These are usually the most common type. Typically they arrive in HTTP query parameters and are used by server-side scripts to parse and display a page of results for the user.

Persistent XSS: These are when the data from the attacker is actually saved on the server and then displayed to the user, mimicking a normal page.

Other XSS vulnerabilities include DOM-based, stored server, reflected server, stored client, and reflected client variants.

X-XSS-Protection Directives

A value of 0 disables the XSS filter, as seen below.

x-xss-protection: 0;

A value of 1 enables the XSS filter. If a cross-site scripting attack is detected, the browser sanitizes the page in order to stop the attack.

x-xss-protection: 1;

Adding mode=block instructs the browser to block the response entirely instead of sanitizing it.

x-xss-protection: 1; mode=block

Enabling X-XSS Protection Header

The x-xss-protection header is easy to implement and only requires a slight web server configuration change. You might also want to check that you don’t already have the header enabled. Here are a couple of easy ways to quickly check.

  1. Open the Network tab in Chrome DevTools; if your site is using a security header, it will show up on the Headers tab. For example, the KeyCDN blog itself uses this security header.
  2. Another quick way to check your security headers is to scan your site with a free tool, securityheaders.io, created by Scott Helme. It gives you a grade based on all of your security headers, so you can see what you might be missing.

Enable in Nginx

add_header x-xss-protection "1; mode=block" always;

Enable in Apache

Header always set x-xss-protection "1; mode=block"

Enable on IIS

To enable it on IIS, simply add it to your site’s Web.config file.

<system.webServer>
    <httpProtocol>
        <customHeaders>
            <add name="X-XSS-Protection" value="1; mode=block" />
        </customHeaders>
    </httpProtocol>
</system.webServer>

Reflected-XSS Directive

An important thing to keep in mind is that the X-XSS-Protection header is largely being replaced by the new Content Security Policy (CSP) reflected-xss directive. The reflected-xss directive instructs a user agent to activate or deactivate any heuristics used to filter or block reflected cross-site scripting attacks. Valid values are allow, block, and filter. This directive is not supported in the <meta> element.
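For reference, the directive would be delivered in a Content-Security-Policy header roughly like the following (reflected-xss appeared in the CSP 1.1 draft and was never widely implemented, so treat the exact syntax as a sketch):

```
Content-Security-Policy: reflected-xss block
```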

It is not yet supported in all browsers, however, so it is still recommended to use the x-xss-protection header. You can also use the x-xss-protection header and the reflected-xss directive together.

Summary

Hopefully, now you understand a little more about what the x-xss-protection HTTP response header does and how it can help prevent cross-site scripting (XSS) attacks. As seen above, this is very easy to implement. We use security headers on our websites and we encourage you to do the same. Together we can make the web a more secure place and help boost the security header usage numbers.

How to Secure Your PostgreSQL Database – 5 Tips


PostgreSQL may be the world’s most advanced open source database, but its 82 documented security vulnerabilities per the CVE database also make it highly exploitable. The popular object-relational database is considered superior to others regarding out-of-the-box security. However, proper measures are still required to protect web applications and underlying data. The following are 5 common ways to secure your PostgreSQL implementation from cyber attacks.

1. Do Not Use Trust Security.

When using trust authentication, PostgreSQL assumes that anyone connected to the server is authorized to access the database with whatever database username they specify (i.e., the database trusts that they are who they say they are). To lock this down, edit your pg_hba.conf to use a non-trust authentication method such as md5. Additionally, remote login access should be revoked for the template1 and default postgres databases.
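A hedged sketch of what that looks like in practice (the address range and database names are illustrative):

```
# pg_hba.conf – require md5 password authentication instead of trust
# TYPE  DATABASE  USER  ADDRESS       METHOD
host    all       all   10.0.0.0/24   md5
```

And revoking remote login (connect) rights on the default databases from SQL:

```sql
REVOKE CONNECT ON DATABASE template1 FROM PUBLIC;
REVOKE CONNECT ON DATABASE postgres  FROM PUBLIC;
```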

2. Use Hash-Based Column Encryption for Values That Don’t Need to Be Decrypted

Encryption methods such as AES are two-way—they can be decrypted—while hash functions such as MD5 are one-way. For values that only need to be checked for a match, such as passwords, use one-way hashing for an added layer of security in case table data is compromised.
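With the pgcrypto extension, a one-way check might be sketched as follows (the table and column names are hypothetical, and crypt() with a bcrypt salt is a stronger choice than plain MD5 for passwords):

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Store only the hash, never the plaintext:
UPDATE users SET password = crypt('s3cret-value', gen_salt('bf'));

-- Check for a match without ever decrypting:
SELECT * FROM users WHERE password = crypt('s3cret-value', password);
```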

3. Use Physical Separation to Isolate Datasets that Need to be Kept Apart

Using pg_hba and RBAC to control access to physically disparate databases ensures that data in two tables cannot be accessed/viewed simultaneously. Of course, this will break SQL joins, so only use in appropriate scenarios that require physical access separation during the life of a login session.

4. Consider Disabling Remote Access to PostgreSQL

This action alone eliminates a host of substantial attack vectors. Again, this can be set in pg_hba.conf. If remote access to the database is required, SSH to the server housing the database and use a local connection from there. Alternatively, you can set up tunnel access to PostgreSQL through SSH, effectively giving client machines access to the remote database as if it were local.
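A tunnel of that kind might be set up as follows (the host names are illustrative); the client then connects to localhost:5432 as if the database were local:

```
# Forward local port 5432 to PostgreSQL on the database server
ssh -N -L 5432:localhost:5432 admin@db.example.com
```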

5. Use pg_hba.conf to Specify Which Hosts Can Use SSL-Encrypted and Unencrypted Connections

This can be accomplished by adding and removing the appropriate entries in the pg_hba.conf file. Generally speaking, all clients should be forced to connect with SSL by adding the necessary hostssl entries. Under this model, all plain host entries should be removed (aside from localhost).

What are Web Shell Attacks? How to Protect Web Servers


What is a Web Shell?

A web shell is a malicious script written in a popular web application language such as PHP, JSP, or ASP. It is installed on a web server to facilitate remote administration. When weaponized, a web shell can allow threat actors to modify files and even access the root directory of the targeted web server. Both internet-facing and non-internet-facing servers (such as resource hosting servers) can fall victim to web shell attacks. Web shell attacks are a convenient cyber attack tactic because their execution doesn’t require additional programs – a communication channel can be achieved simply through the HTTP protocol in web browsers, which is one reason it’s so important to prefer HTTPS.
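For illustration only, a web shell can be as small as a few lines of PHP; once dropped on a server, it runs whatever command is passed in a request parameter (the parameter name cmd is just a common convention):

```php
<?php
// Minimal web shell: executes an attacker-supplied command
// and writes its output into the HTTP response.
if (isset($_GET['cmd'])) {
    system($_GET['cmd']);
}
```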

How Do Web Shell Attacks Work?

Cyber attackers first locate servers with exposures vulnerable to web shell attacks through scanning software such as Shodan.io. Shodan surfaces all internet-connected devices, including web servers and endpoints, that could serve as attack vectors. Once a vulnerability is discovered, cyberattackers launch a web shell attack before a patch for the exposure is installed. The exploitation of CVE-2020-5902 is an example of how fast cybercriminals abuse exposures that facilitate web shell injections. On June 30, 2020, F5 Networks released a patch for its Traffic Management User Interface (TMUI). The vulnerability facilitated Remote Code Execution (RCE) – a type of cyber attack involving the remote injection of malicious code into a targeted system. Just four days after the vulnerability was published, on July 4, exploit code abusing the exposure was discovered.

CVE-2020-5902 exploit code – Source: Microsoft.com

The first stage of a server infection is to penetrate the outer layer of its ecosystem. This is usually achieved by pushing corrupted web shells through file upload web pages. After this, a Local File Inclusion (LFI) vulnerability is used to connect the web shell to a selected web application page. There are many other web shell injection strategies, including the detection and compromise of exposed admin interfaces, Cross-Site Scripting (XSS), and SQL injection. After the web shell has been installed, a backdoor is naturally established, giving cybercriminals direct remote access to the compromised web server at any time. The efficiency of backdoor creation with web shells is why web shell attacks are primarily used as persistence mechanisms – establishing a long-term malicious internal network presence. Because of this, data breaches and ransomware injections rarely immediately follow a web shell attack. Hackers usually establish an access channel for a future attack or reconnaissance mission.

How to Block Web Shell Injections

It’s much easier to address the vulnerabilities that facilitate web shell injection than to intercept attacks. The following suggested controls and security tools should be used to locate and remediate all possible web shell injection points in your IT ecosystem.

1. Stay Updated with the Latest Security Patches

Security vulnerabilities are the most common pathways for web shell attacks. To block these entry points, keep all web applications, Content Management Systems, web server software, and third-party software updated with the latest security patches.

2. Disable Unnecessary Web Server Functions

If a web shell is injected, its execution could be blocked if the functions that communicate with web server scripts are disabled in php.ini. Such web server functions include:

  • exec()
  • eval()
  • shell_exec()
  • assert()
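In php.ini these are blocked with the disable_functions directive. Note that eval is a language construct rather than a function, so it cannot be disabled this way and has to be caught in code review instead; the list below is a hedged starting point, not an exhaustive one:

```ini
; Block functions commonly abused by web shells
disable_functions = exec, passthru, shell_exec, system, proc_open, popen
```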

3. Modify the Names of Sensitive Directories

To prevent the upload of corrupted image files, the directories that facilitate such uploads should ideally be disabled completely. If such an upload mechanism is necessary, the default names of these sensitive directories should be changed to make them harder to discover. Only privileged users should have permission to make these modifications, to mitigate insider threats. In addition, specify a filter for the permitted file types that can be uploaded to your web server.

4. Disable All Unnecessary WordPress Plugins

WordPress plugins are common attack vectors because anyone can develop them – even cybercriminals. To secure these vectors, only install plugins from trusted developers and uninstall all unnecessary plugins.

5. Implement a Firewall

A Web Application Firewall (WAF) is designed to prevent web shells and malicious payloads from being injected into an ecosystem by filtering all network traffic. Like antivirus software, keeping your firewall updated with the latest cybersecurity patches is important.

6. Implement File Integrity Monitoring

A file integrity monitoring solution will compare directory updates against the timestamps of clean directory scripts. If a discrepancy is detected, the requested installation on the code directory of the targeted web server will either be blocked or activate a security alert.
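A minimal sketch of that comparison in Python, assuming a baseline of known-good SHA-256 digests is stored somewhere tamper-proof (the function names are my own, not from any particular product):

```python
import hashlib
from pathlib import Path

def file_hash(path):
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def find_discrepancies(baseline):
    """Compare current files against a baseline dict of path -> digest.

    Returns the paths that were modified or removed; a real monitor
    would block the change or raise a security alert for each one.
    """
    suspicious = []
    for path, expected in baseline.items():
        p = Path(path)
        if not p.exists() or file_hash(p) != expected:
            suspicious.append(path)
    return suspicious
```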

7. Monitor Your Attack Surface

An attack surface monitoring solution completes vulnerability scans of the entire attack surface – both internally and throughout the vendor network. This allows security teams to remediate exposures before cyber attackers discover and exploit them.

Best Practices for Sock Puppets

Creating research accounts can be challenging, and it often takes effort and experimentation to get right. Trial and error is often the key to success in this process. There is no step-by-step recipe for setting up accounts, but here are some considerations before creating a research account; some points may seem basic but are equally important.

The best approach is to create the account the way a regular user would: enter an email address and password quickly, without hesitation.

  • IP Address: To avoid getting flagged by social media platforms, it’s best not to use a Virtual Private Network (VPN) when creating a sock account. After making the account, signing in from different locations using free Wi-Fi connections (like those available at coffee shops) is essential, as this will show the platform that you are a legitimate user. By using a variety of IP addresses, you’ll be less likely to get flagged.
  • Name: Use fictional details when choosing a name for your sock account. Avoid using your real identity. Consider what name would blend in with your target group, because you do not want your account to stand out if it is suggested as a friend.
  • Email address: You have several email provider options (Mail.com, Gmail.com, Yandex.com, Outlook.com). Do not use a previously created email address – always start fresh and create a new email that has never been used.
  • Phone verification: If you cannot bypass the verification, use a burner phone and SIM card to create accounts.
  • Profile photo: When choosing images to post on social media, it’s best to use generic landscapes like mountains, beaches, etc. It’s important to avoid using someone else’s identity or photos. Stock images can be helpful in some cases, but you should always crop the photo to delete any previously stored data before uploading. Social media platforms have algorithms that can detect the use of stock images, and your account may be flagged if this is seen.
  • Activity: Once your account is created, you must start interacting naturally, such as posting links, liking pages, etc. The main objective is to mimic how a real person would use a new account and convince the platform that you are one.
  • Setting/Privacy settings: Immediately review and set the privacy settings for the platform and choose the most secure privacy settings that will allow people to see as little information as possible.

What is Attack Surface Management?


Attack surface management (ASM) is the continuous discovery, inventory, classification, prioritization, and security monitoring of external digital assets that contain, transmit, or process sensitive data. In short, it is everything outside of the firewall that attackers can and will discover as they research the threat landscape for vulnerable organizations. In 2018, Gartner urged security leaders to start reducing, monitoring and managing their attack surface as part of a holistic cybersecurity risk management program.

Today, attack surface management is a top priority for CIOs, CTOs, CISOs, and security teams.

What is an Attack Surface?

Your attack surface is all the hardware, software, SaaS, and cloud assets accessible from the Internet that process or store your data. Think of it as the total number of attack vectors cybercriminals could use to manipulate a network or system to extract data. Your attack surface includes:

Known assets: Inventoried and managed assets such as your corporate website, servers, and the dependencies running on them.

Unknown assets: Shadow IT or orphaned IT infrastructure that stands outside the purview of your security team, such as forgotten development websites or marketing sites.

Rogue assets: Malicious infrastructure spun up by threat actors such as malware, typosquatting domains, or a website or mobile app that impersonates your domain.

Vendors: Your attack surface doesn’t stop with your organization; third-party and fourth-party vendors introduce significant risk. Even small vendors can lead to large data breaches; look at the HVAC vendor whose compromise eventually led to Target’s exposure of credit card and personal data on more than 110 million consumers.

Millions of these assets appear on the Internet daily and are outside the scope of firewall and endpoint protection services. Other names include external attack surface and digital attack surface.

Authentication vs. Authorization: When To Use Which One


What’s the difference between authentication and authorization? Does it matter which you use — or do you need both? Are they the secure app’s chicken and egg problem? Let’s dive in.

What is authentication vs. authorization?

Generally, programmers have subconscious reasoning for applying security and access controls to their applications. If questioned, they’ll describe a simple set of AUTH codes that protect the app. Are you using authentication, authorization, or both? What’s the difference? Do these get mixed up because they both begin with “AUTH” and are sometimes referred to by the same shorthand? Maybe some of the confusion comes from the fact that each seems to let you address the other’s concerns if you stretch it a bit.

But authentication and authorization are two separate things. Even if they go hand-in-hand for many applications, it’s essential to know the difference. Once you fully grasp this, it’s clearer how to implement each securely and effectively.

Authentication vs. authorization example

I will use the terms application and actor to discuss authentication and authorization. My background is in web applications, so I tend to think of the people using my web application as visitors. But actor refers to any human or computer accessing your application in any manner, and application means any piece of software. So we could be referring to a website visitor, a desktop application user, or a consumer querying your API.

Authentication refers to identifying the actor using your application. That’s it. All authentication is responsible for is saying, “I can identify this actor” or “this actor is unidentified.” The meaning and description of authentication are simple — that doesn’t mean the implementation is. For example, you might have a username and password, tokens, cookies, JWTs, or other ways to authenticate a user. But, after the process is complete, we either can identify the actor or we can’t.

Authorization refers to the permission to accomplish a task in your application. Tasks can be simple, like listing resources or seeing the details of an object. Advanced ones may create, update, or move data through a complex business workflow. Now, this is important: authorization does not require an authenticated actor. It just so happens that most authorization decisions are based on the specific permissions of an identified actor, but that isn’t a requirement. We’ll touch on this more later. But authorization is the mechanism that answers the question: “does this actor have permission to do this action?” Both authentication and authorization are necessary.

Let me share a real-world example.

A client had requested that their web application list certain resources for any visitor. Multiple pages showed this listing, as did an API. They then asked that if a visitor clicks through to the detail page or requests details over the API, the visitor must be authenticated.

The simple approach is to check for an authenticated user at the top of the code that retrieves details. The other code needs no checks because it doesn’t matter whether the visitor is logged in. (That is to say, whether the user is authenticated or not, either situation allows access.) This application has only two states: the user is authenticated or not.

Only two states are rarely the case. Furthermore, there’s hardly ever a time that an application matures without expanding its access controls beyond these two states. So, with this knowledge in place, I decided to implement authorization on all endpoints.

I implemented an authorization layer that simply returned valid on the endpoints that displayed lists of resources (including the API). A simple class reused on each endpoint always authorized access to the list of resources. Then, on the endpoint with the details, I created another class that returned true only when I had the visitor’s identity.
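A sketch of that layer in Python (the class and function names are my own invention, not the author’s actual code):

```python
class AllowAll:
    """Authorizer reused on every list endpoint: everyone may see the lists."""
    def authorize(self, visitor):
        return True

class RequireIdentity:
    """Authorizer for the detail endpoint: only identified visitors pass."""
    def authorize(self, visitor):
        return visitor is not None

def get_resource_list(visitor, authorizer=AllowAll()):
    # Authorization runs for every action, even the always-allowed ones.
    if not authorizer.authorize(visitor):
        raise PermissionError("not authorized")
    return ["resource-1", "resource-2"]

def get_resource_details(visitor, authorizer=RequireIdentity()):
    if not authorizer.authorize(visitor):
        raise PermissionError("not authorized")
    return {"id": "resource-1", "detail": "..."}
```

Every endpoint asks an authorizer object for a decision, so changing who may see a list means editing one small class rather than touching endpoint code.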

Then it happened: business requirements changed! About a week after the web page launched, the business decided they wanted the lists of resources locked behind authentication too. Guess what? Because I had implemented authorization correctly to begin with, the change was super simple: I modified the authorization class for listing resources to check for an identified visitor. Done! I can also imagine a scenario where lists require an authenticated user but details require a user with a subscription. If that happens, I’m ready! I’m prepared for future changes in the business, too.

What is the point?

Authentication determines the identity of an actor. Authorization decides whether an actor has permission to take an action. Your application, no matter how simple, should use both hand-in-hand. Authenticate actors when you must. Authorize actors, regardless of their identity, for every action.

What are Sock Puppets in OSINT

Sock puppets, or research accounts, are fictitious online identities that conceal an OSINT investigator’s true identity. They are created to gain access to information that requires an account to view. However, it is essential to note that creating fake accounts goes against the Terms of Service of some websites; users are responsible for reading and understanding the Terms of Service of the websites they use. Although creating sock puppets is not usually illegal, it is equally important to check your organization’s policies to ensure you have permission to create and use them.

Purpose of Sock Puppets

Sock puppets are created to keep OSINT research separate from personal life. This ensures that OSINT investigators maintain anonymity and practice good Operational Security (OPSEC). It is crucial to emphasize the importance of separating an OSINT investigator’s real identity from their research accounts.

Some social media platforms, such as Facebook, may expose your identity to a target being investigated through friend recommendations. Additionally, if you use your personal account to conduct online research, you may accidentally like a post or send a friend request to your target. To avoid these risks, you should create sock puppets before starting your research. To put it in perspective, imagine a police officer conducting surveillance in their personal vehicle, which would reveal their identity. You would not do that, right? Similarly, you should not use your personal social media accounts to research a subject, because doing so can expose your real identity.

What are the Sock Puppet Functions?

Passive research means that you do not interact with a particular target. However, your profile might still show up in the “suggested friends” or “people to follow” results, so it is advisable to blend in a little. One way to do this is by choosing a name that fits well with your target group.

Engaging with your target in some way, such as adding them as a friend on Facebook, is the essence of active research. Blending in with the target group during active research is even more crucial. If you plan on engaging with your target, creating a few accounts on different platforms is recommended to make you appear to be a real person.

What is OpenID Connect?

OpenID Connect extends the OAuth protocol to provide a dedicated identity and authentication layer that sits on top of the basic OAuth implementation. It adds some simple functionality that enables better support for the authentication use case of OAuth.

OAuth was not initially designed with authentication in mind; it was intended to be a means of delegating authorizations for specific resources between applications. However, many websites began customizing OAuth for use as an authentication mechanism. To achieve this, they typically requested read access to some basic user data and, if they were granted this access, assumed that the user had authenticated themselves with the OAuth provider.

These plain OAuth authentication mechanisms were far from ideal. For a start, the client application had no way of knowing when, where, or how the user was authenticated. As each of these implementations was a custom workaround of sorts, there was also no standard way of requesting user data for this purpose. To support this properly, client applications would have to configure separate OAuth mechanisms for each provider, each with different endpoints, unique sets of scopes, and so on.

OpenID Connect solves a lot of these problems by adding standardized, identity-related features to make authentication via OAuth work in a more reliable and uniform way.
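The centerpiece of that identity layer is the ID token, a JWT whose payload carries standardized claims about the authentication event (issuer, subject, audience, expiry). A rough Python sketch of the base64url payload encoding involved – unsigned and simplified, since real ID tokens are signed by the provider, and the claim values here are invented:

```python
import base64
import json

def b64url_encode(data: bytes) -> str:
    """Base64url-encode without padding, as JWT segments are."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(text: str) -> bytes:
    """Reverse of b64url_encode: restore padding, then decode."""
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

# Standardized OpenID Connect claims (the values are made up):
payload = {
    "iss": "https://provider.example.com",  # who authenticated the user
    "sub": "user-12345",                    # stable user identifier
    "aud": "client-app-id",                 # which client the token is for
    "exp": 1700000000,                      # expiry (Unix time)
    "auth_time": 1699996400,                # when authentication happened
}

segment = b64url_encode(json.dumps(payload).encode())
claims = json.loads(b64url_decode(segment))
```

Because these claims are standardized, a client application can consume identity information the same way from any OpenID Connect provider.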