Slow HTTP DoS: verify and mitigate

What it is
https://en.wikipedia.org/wiki/Slowloris_(computer_security)

Tools to check
Slowloris: https://github.com/gkbrk/slowloris
SlowHTTPTest: https://github.com/shekyan/slowhttptest
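
A quick way to verify is to run slowhttptest in slow-headers (Slowloris-style) mode against your own server; the target URL and the numbers below are just placeholder values:

# slow-headers test: 1000 connections, follow-up header data every 10 seconds,
# 200 new connections per second, run the test for 180 seconds
slowhttptest -c 1000 -H -i 10 -r 200 -l 180 -u http://your-server.example.com/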

Mitigation

mod_qos
apt-get update && apt-get -y install libapache2-mod-qos && a2enmod qos && /etc/init.d/apache2 restart
* configuration file in /etc/apache2/mods-enabled/
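
A sketch of a starting point for the qos config (the values are assumptions and need tuning for your own traffic):

# /etc/apache2/mods-enabled/qos.conf
<IfModule qos_module>
    # maximum number of active TCP connections the server will accept
    QS_SrvMaxConn 256
    # limit concurrent connections per client IP
    QS_SrvMaxConnPerIP 50
    # minimum upload/download rate in bytes/second a client must sustain,
    # otherwise the connection is dropped (this is what defeats slow requests)
    QS_SrvMinDataRate 120 1200
    # disable keep-alive once this many connections are open
    QS_SrvMaxConnClose 180
</IfModule>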

mod_reqtimeout
a2enmod reqtimeout && /etc/init.d/apache2 restart
* configuration file in /etc/apache2/mods-enabled/
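
The shipped reqtimeout.conf is usually already close to what you want; something along these lines (the values shown follow the commonly shipped defaults):

# /etc/apache2/mods-enabled/reqtimeout.conf
<IfModule reqtimeout_module>
    # allow 20s to receive the request headers, extended up to 40s
    # as long as the client keeps sending at least 500 bytes/second,
    # and the same minimum rate for the request body
    RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
</IfModule>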

Securing apache

Some notes on securing Apache.

A few key points

  • Disable access to dotfiles
  • Disable banner
  • Disable PHP functions

Disable access to dotfiles
https://stackoverflow.com/questions/4352737/apache-configuration-regex-to-disable-access-to-files-directories-beginning-wit


Something along these lines denies access to any file or directory whose name begins with a dot:

<DirectoryMatch "^\.|\/\.">
    Order allow,deny
    Deny from all
</DirectoryMatch>

<FilesMatch "^\.">
    Order allow,deny
    Deny from all
</FilesMatch>

Disable banner
http://www.ducea.com/2006/06/15/apache-tips-tricks-hide-apache-software-version/

Usually found in /etc/apache2/conf-enabled/security.conf
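
The two directives to set there are ServerTokens and ServerSignature:

# /etc/apache2/conf-enabled/security.conf
# report only "Apache" in the Server header, no version or OS details
ServerTokens Prod
# remove the version footer from server-generated pages (error pages, listings)
ServerSignature Off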

Disable PHP functions
https://www.cyberciti.biz/faq/linux-unix-apache-lighttpd-phpini-disable-functions/

disable_functions = exec,passthru,shell_exec,system,proc_open,popen,curl_exec,curl_multi_exec,parse_ini_file,show_source
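
This line goes into the php.ini that Apache loads; the exact path varies by distribution and PHP version, so treat the commands below as a sketch:

# find the php.ini actually in use, edit it, then reload Apache
php -i | grep "Loaded Configuration File"
service apache2 restart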

Easiest way to install ffmpeg on mac os

Just keeping this for reference.

https://www.oodlestechnologies.com/blogs/Easiest-Way-To-Install-FFmpeg-On-Mac-OS-X
Install Homebrew first:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

https://gist.github.com/clayton/6196167
Then install ffmpeg with the options you need:
brew install ffmpeg --with-vpx --with-vorbis --with-libvorbis --with-theora --with-libogg --with-gpl --with-version3 --with-nonfree --with-postproc --with-libaacplus --with-libass --with-libcelt --with-libfaac --with-libfdk-aac --with-libfreetype --with-libmp3lame --with-libopencore-amrnb --with-libopencore-amrwb --with-libopenjpeg --with-openssl --with-libopus --with-libschroedinger --with-libspeex --with-libtheora --with-libvo-aacenc --with-libvpx --with-libx264 --with-libxvid

Installing cuckoo sandbox on Mac OS

Cuckoo Sandbox is an automated malware analysis system. It uses a virtualization engine to isolate malware execution and analysis. You get a web interface as well as CLI tools to communicate with Cuckoo, for example to upload samples and review the reports.

I’ve found a complete tutorial on this.

http://advancedmalwareprotection.blogspot.com/2012/03/installing-cuckoo-on-max-os-x-lion.html

Hope this helps anyone who wants to set this up. The tutorial is a bit technical; if you're not familiar with the Mac system or the Terminal, you might just want to use a public sandbox instead.

Smokeping: Ping latency grapher to the rescue

Smokeping is a great tool written by Tobias Oetiker, the same guy who wrote MRTG, the world's most popular graphing tool. Smokeping gives you another dimension for viewing your network environment, namely ping latency, graphed nicely in colour.

Installation is easy: you can install it using apt-get on a Debian-based distro. Configuration is done through the config file, usually located in /etc/smokeping/. The default configuration files include some samples, so you can follow the syntax to add additional hosts to monitor.
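
For example, adding a host of your own to the Targets file looks roughly like this (the section names and hostname here are placeholders):

# /etc/smokeping/config.d/Targets
+ MySites
menu = My Sites
title = My Sites

++ WebServer
menu = web01
title = web01.example.com
host = web01.example.com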

The great thing about Smokeping is that it gives you a link-quality metric, something usually checked manually with ping. The ping data is graphed as in the sample here, showing the ping latency (the time it takes for a ping to get a response) as well as dropped packets.

[smokeping latency graph]

As in the example above, we can see that during the day the link quality drops to around 95% and latency increases as well. Even though you have a high-speed link, the graph can tell a different story: the drops are likely caused by packets being discarded at network devices along the path, which probably could not handle the heavy traffic.

Nuffnang serving malicious links?

It is well known that advertising networks have been used as a medium for malware writers to reach a massive user base easily. They can simply inject a malicious URL into the system, and any website serving the ads will serve the malicious links as well.

A few weeks ago, my website was blocked by the Google Safe Browsing API: a full-screen alert on a red background, which scared my visitors away.

I quickly suspected my website had been compromised, given the frequent attempts to exploit vulnerabilities in, and brute-force attacks against, the CMS in use. I checked the filesystem for backdoors and looked for rogue entries in the database and in the CMS options, but could not find anything. A further check at sitecheck.sucuri.com also did not reveal anything.

Later, Wepawet revealed something. Wepawet scanned my page and showed the redirections occurring on the page and any remote references to other sites, and the Nuffnang ads appeared to be redirecting my users to a malicious URL.

[Wepawet scan result for the page]

I contacted Nuffnang about this, and at first they did not want to accept it. As an immediate measure, I removed the Nuffnang ads from my page. I was not earning anything from the Nuffnang ads anyway, while I have made more than a few hundred dollars from this site through other advertising campaigns.

wp-login.php brute force

Lately my server has been receiving many WordPress brute-force attempts. Some of them slow us down, and server resources are simply wasted on these requests. Some searching led me to this site, which provides a good mod_security configuration to block the attack for a short period.

http://www.frameloss.org/2011/07/29/stopping-brute-force-logins-against-wordpress/

This is just a snippet, if you're wondering; the hits can come in at more than 20 requests at a time. Below are the mod_security configuration directives. I assume you already have mod_security running.


# This has to be global, cannot exist within a directory or location clause . . .
SecAction phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR},initcol:user=%{REMOTE_ADDR}

<LocationMatch "/wp-login.php">
# Setup brute force detection.
# React if block flag has been set.
SecRule user:bf_block "@gt 0" "deny,status:401,log,msg:'ip address blocked for 5 minutes, more than 15 login attempts in 3 minutes.'"
# Setup Tracking. On a successful login, a 302 redirect is performed, a 200 indicates login failed.
SecRule RESPONSE_STATUS "^302" "phase:5,t:none,nolog,pass,setvar:ip.bf_counter=0"
SecRule RESPONSE_STATUS "^200" "phase:5,chain,t:none,nolog,pass,setvar:ip.bf_counter=+1,deprecatevar:ip.bf_counter=1/180"
SecRule ip:bf_counter "@gt 15" "t:none,setvar:user.bf_block=1,expirevar:user.bf_block=300,setvar:ip.bf_counter=0"
</LocationMatch>

flow duplicator

samplicator

Nowadays I am able to play around with flow data. Flow data provides detailed information about network traffic, for various purposes such as network monitoring, bandwidth monitoring, traffic accounting and archiving, and security.

We have configured a layer 2/3 switch to send sFlow data to my monitoring server. Some visualization is done by nfsen, but it is quite limited; it is really just a flow browser. Some other nfsen plugins might help, such as cndet and nfsight, which can give additional summaries and analysis of your network traffic. Therefore, we have another flow analyzer, OpManager Flow Analyzer by ManageEngine.

For this, I still want the data to be kept on that machine, while sending a duplicate of the sFlow data to OpManager. I found samplicator, written by Simon Leinen at SWITCH. It works well, duplicating the data to another machine, and it is able to keep the original source IP, so as far as OpManager can tell, the data comes from the switch.

Samplicator – http://code.google.com/p/samplicator/
http://www.plixer.com/blog/netflow/free-netflow-forwarder-or-netflow-duplicator
http://www.bradreese.com/blog/plixer-5-21-2010.htm
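
A minimal sketch of how it can be run (the port and the OpManager address below are placeholders; -S needs root because it spoofs the source address):

# listen for sFlow on UDP 6343 and forward a copy to the OpManager host,
# keeping the original source IP so it still appears to come from the switch
samplicate -S -p 6343 192.0.2.10/6343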

mod_fcgid: HTTP request length xxxxx (so far) exceeds MaxRequestLen (131072)

This error message appeared today, and it seems to be related to mod_fcgid.

As defined here, http://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html#fcgidmaxrequestlen, the default value for MaxRequestLen is 131072 bytes, which is quite low for most deployments. I would prefer to use a higher value.

For cPanel users, the configuration lies in this file:
/usr/local/apache/conf/php.conf

You need to add an additional line, which sets the limit to 2 MB:
MaxRequestLen 2097152
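
On a non-cPanel setup, the same directive usually lives in the mod_fcgid configuration; something along these lines (the file location varies by distribution):

# e.g. in httpd.conf or the distribution's fcgid.conf
<IfModule mod_fcgid.c>
    # allow request bodies up to 2 MB (2 * 1024 * 1024 bytes)
    MaxRequestLen 2097152
</IfModule>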

Restart your apache, and you’re done.

Slow log – monitoring application’s health through slow log

The concept is to log slow-performing applications or scripts, so you know which tasks are the hardest to complete on time. From there, you will know which tasks take too much time (and most probably too many resources) and need to be optimized so they do not stop other tasks from completing in a timely manner.

MySQL implements a slow log, which records queries that take too long to complete. MySQL has a feature where you can set global configuration on the fly, which stays in effect until the next MySQL restart. If you want to make it permanent, it needs to be placed in the my.cnf configuration file.

To enable the slow log:

SET GLOBAL slow_query_log := 1;
SET GLOBAL long_query_time=10;
SET GLOBAL LOG_QUERIES_NOT_USING_INDEXES = ON;
SET GLOBAL slow_query_log_file = 'path';

slow_query_log (1 = enable, 0 = disable)
long_query_time (the threshold, in seconds, above which a query is considered slow)
LOG_QUERIES_NOT_USING_INDEXES (if turned ON, queries not using indexes are also logged as slow)
slow_query_log_file (the path to the log file)

If you want to make the settings permanent, put them in the my.cnf file:
log_slow_queries="path"
long_query_time=10
log-queries-not-using-indexes =1

For more information, please refer to the official documentation on the slow query log at mysql.com.
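
Once the log has some entries, mysqldumpslow (shipped with MySQL) is handy for summarizing it; the log path below is just a placeholder:

# group similar slow queries and show the top 10 sorted by query time
mysqldumpslow -s t -t 10 /path/to/slow_query.log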

Besides MySQL, your web server is a crucial part of processing your web application as well, and for that you might want to consider this Apache module, mod_log_slow, to log scripts that take a long time to complete.

It is an Apache module, so you need to compile it using apxs. Add some configuration to load the module file and to control how it behaves, and you are ready to go. The documentation provided in the archive, and on the Google Code page, is enough to get it running.
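
The compile step is roughly this (assuming the source file from the archive is named mod_log_slow.c):

# compile the module with apxs and install it into the Apache modules directory
apxs -i -c mod_log_slow.c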

The configuration below is enough to enable the module. Find httpd.conf and put in these few lines (the request-time threshold is in milliseconds, so 8000 means 8 seconds):
LoadModule log_slow_module modules/mod_log_slow.so
LogSlowEnabled On
LogSlowLongRequestTime 8000
LogSlowFileName /usr/local/apache/logs/slow_log

Hope this helps you in monitoring your servers, getting to know which one is sick and needs attention.