Automate PNG & JPG Image Optimization


Download Source Code

Introduction

If you are a web developer, you already know how important it is to reduce image size by compressing your images. When you check page speed using a tool like “Google PageSpeed Insights” or “Yahoo YSlow”, you can see how many bytes can be saved by compressing the images.

[Image: Google PageSpeed Insights result]

Images saved from programs like Fireworks can contain kilobytes of extra comments, and use too many colors, even though a reduction in the color palette may not perceptibly reduce image quality. Improperly optimized images can take up more space than they need to; for users on slow connections, it is especially important to keep image sizes to a minimum.

You should perform both basic and advanced optimization on all images. Basic optimization includes cropping unnecessary space, reducing color depth to the lowest acceptable level, removing image comments, and saving the image to an appropriate format. You can perform basic optimization with any image editing program, such as GIMP. Advanced optimization involves further (lossless) compression of JPEG and PNG files. You should see a benefit for any image file that can be reduced by 25 bytes or more (less than this will not result in any appreciable performance gain). (Google Inc., 2012, Optimize images – https://developers.google.com/speed/docs/best-practices/payload#CompressImages)

There are online tools like “Yahoo Smush.it” that use lossless compression techniques and reduce file size by removing unnecessary bytes from the image. But if you want to automate this, how do you do it? Several standalone tools are available that perform lossless compression on JPEG and PNG files.

For JPEG, Google recommends using:

  • jpegtran – available for Windows, Linux and Mac
  • jpegoptim – available only on Linux

For PNG, Google recommends using:

  • OptiPNG
  • PNGOUT

Using the code

Here I wrote a Windows batch file which recursively searches the given folder and optimizes the JPEG and PNG files.

  1. Download the jpegtran, OptiPNG and PNGOUT executable files (or download the attached zip file; all the necessary files are already included).
  2. Create a folder “ImageOptimization” on your D:\ drive (you can change the name and folder location by editing the batch file content) and put the downloaded utility files there.
  3. Create a batch file “optimize.bat” within the folder and copy the following code into it:
    @echo off
    REM Optimizing JPEG with jpegtran
    forfiles /p %1 /s /m "*.jpg" /c "cmd /c  echo processing @path && D:\ImageOptimization\jpegtran.exe -optimize -progressive -copy none -outfile @path @path"
    REM Optimizing PNG with pngout
    forfiles /p %1 /s /m "*.png" /c "cmd /c  echo processing @path && D:\ImageOptimization\pngout.exe @path"
    REM Optimizing PNG with optipng
    rem forfiles /p %1 /s /m "*.png" /c "cmd /c  echo processing @path && D:\ImageOptimization\optipng.exe -force -o7 @path"
    pause
    

    Although I included both PNGOUT and OptiPNG in the script, you do not need to use both (note that the OptiPNG line is commented out with REM by default).

  4. Finally, execute the batch file, passing the image folder you wish to optimize:
    optimize.bat "D:\image"

How it works

  • forfiles command – selects a file (or set of files) and executes a command on each file (batch processing). Refer to: http://ss64.com/nt/forfiles.html
  • %1 – accepts the folder as a parameter. In the above example this equals “D:\image”

The forfiles command finds all the images in the given directory (recursively) and runs the optimization executable on each one, passing the image path to it as a parameter (@path).
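
For example, for a single JPEG the command that forfiles executes expands to something like this (the file path here is just an illustration):

    cmd /c echo processing "D:\image\logo.jpg" && D:\ImageOptimization\jpegtran.exe -optimize -progressive -copy none -outfile "D:\image\logo.jpg" "D:\image\logo.jpg"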

Try

  1. You can improve this further by adding the batch command as a context menu command:
    http://msdn.microsoft.com/en-us/library/windows/desktop/cc144169(v=vs.85).aspx
  2. Or you can use a scheduler (e.g. Windows Task Scheduler) to pick up only the files updated that day and optimize them, by slightly modifying the forfiles command with the “/d” option, as sketched below.
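
A rough sketch of that variation (untested; /d +0 is intended to select files whose last-modified date is today – check forfiles /? on your system):

    REM Optimize only the JPEGs modified today
    forfiles /p %1 /s /m "*.jpg" /d +0 /c "cmd /c echo processing @path && D:\ImageOptimization\jpegtran.exe -optimize -progressive -copy none -outfile @path @path"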

Counting Consecutive Dates Using SQL


Download demo query

Recently I answered a SQL-based question on CodeProject. Thanks to the person who asked it; that question prompted me to write this blog post.

Question:

He has a table (tblLeave) with data like the following.

PAYCODE LV_TYPE FROM_DATE       TO_DATE         LVALUE
5023    SL      14/12/2012 0:00 14/12/2012 0:00 1
5023    SL      15/12/2012 0:00 15/12/2012 0:00 1
5023    COF     16/12/2012 0:00 16/12/2012 0:00 1
5023    SL      19/12/2012 0:00 19/12/2012 0:00 1
5023    SL      22/12/2012 0:00 22/12/2012 0:00 1
5023    SL      23/12/2012 0:00 23/12/2012 0:00 1
5023    SL      24/12/2012 0:00 24/12/2012 0:00 1
5023    PL      28/12/2012 0:00 28/12/2012 0:00 1
5023    PL      29/12/2012 0:00 29/12/2012 0:00 1
5023    PL      30/12/2012 0:00 30/12/2012 0:00 1
5023    PL      31/12/2012 0:00 31/12/2012 0:00 1

And he wants the output to look like this:

PAYCODE LV_TYPE FROM_DATE       TO_DATE         LVALUE
5023    SL      14/12/2012 0:00 15/12/2012 0:00 2
5023    COF     16/12/2012 0:00 16/12/2012 0:00 1
5023    SL      19/12/2012 0:00 19/12/2012 0:00 1
5023    SL      22/12/2012 0:00 24/12/2012 0:00 3
5023    PL      28/12/2012 0:00 31/12/2012 0:00 4

Condition: if the same type of leave is taken on consecutive days, it should be merged into one row with the corresponding FROM_DATE and TO_DATE.
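
If you want to reproduce the scenario, a table and sample data along these lines should do (the column types are my assumption, based on the values shown above):

CREATE TABLE tblLeave
(
    PAYCODE   INT,
    LV_TYPE   VARCHAR(10),
    FROM_DATE DATETIME,
    TO_DATE   DATETIME,
    LVALUE    INT
);

INSERT INTO tblLeave (PAYCODE, LV_TYPE, FROM_DATE, TO_DATE, LVALUE) VALUES
(5023, 'SL',  '20121214', '20121214', 1),
(5023, 'SL',  '20121215', '20121215', 1),
(5023, 'COF', '20121216', '20121216', 1),
(5023, 'SL',  '20121219', '20121219', 1),
(5023, 'SL',  '20121222', '20121222', 1),
(5023, 'SL',  '20121223', '20121223', 1),
(5023, 'SL',  '20121224', '20121224', 1),
(5023, 'PL',  '20121228', '20121228', 1),
(5023, 'PL',  '20121229', '20121229', 1),
(5023, 'PL',  '20121230', '20121230', 1),
(5023, 'PL',  '20121231', '20121231', 1);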

Answer:

When I saw the question for the first time, I didn’t go through it in detail and thought it was an easy grouping query, so I just gave the following answer.

SELECT PAYCODE, LV_TYPE, MIN(FROM_DATE) AS FROM_DATE, MAX(TO_DATE) AS TO_DATE, COUNT(LVALUE) AS LVALUE
FROM tblLeave
GROUP BY PAYCODE,LV_TYPE

But that’s wrong. He commented that it didn’t make sense and highlighted the condition he wanted (thankfully, he didn’t down-vote my answer). I read the question again… Oh… it was a tricky one. He needs to group the leave by consecutive dates. Isn’t that tricky?

To answer it, I used the DATEDIFF SQL function:

DATEDIFF ( datepart , startdate , enddate )

http://msdn.microsoft.com/en-us/library/ms189794.aspx

Following is my answer, together with the output it produces:

SELECT PAYCODE,LV_TYPE, MIN(FROM_DATE) AS FROM_DATE,
       MAX(FROM_DATE) AS TO_DATE, COUNT('A') AS LVALUE
FROM (
SELECT PAYCODE,LV_TYPE,FROM_DATE,
    DATEDIFF(D, ROW_NUMBER() OVER(ORDER BY FROM_DATE), FROM_DATE) AS Diff
FROM tblLeave) AS dt
GROUP BY PAYCODE,LV_TYPE, Diff
ORDER BY FROM_DATE

PAYCODE     LV_TYPE FROM_DATE               TO_DATE                 LVALUE
----------- ------- ----------------------- ----------------------- -----------
5023        SL      2012-12-14 00:00:00.000 2012-12-15 00:00:00.000 2
5023        COF     2012-12-16 00:00:00.000 2012-12-16 00:00:00.000 1
5023        SL      2012-12-19 00:00:00.000 2012-12-19 00:00:00.000 1
5023        SL      2012-12-22 00:00:00.000 2012-12-24 00:00:00.000 3
5023        PL      2012-12-28 00:00:00.000 2012-12-31 00:00:00.000 4

Query Explanation:

Before explaining the logic, see the following query and its output.

SELECT PAYCODE,LV_TYPE,FROM_DATE,
	ROW_NUMBER() OVER(ORDER BY FROM_DATE) AS ROW_NUMBER,
    DATEDIFF(D, ROW_NUMBER() OVER(ORDER BY FROM_DATE), FROM_DATE) AS Diff
FROM tblLeave

PAYCODE     LV_TYPE FROM_DATE               ROW_NUMBER           Diff
----------- ------- ----------------------- -------------------- -----------
5023        SL      2012-12-14 00:00:00.000 1                    41254
5023        SL      2012-12-15 00:00:00.000 2                    41254
5023        COF     2012-12-16 00:00:00.000 3                    41254
5023        SL      2012-12-19 00:00:00.000 4                    41256
5023        SL      2012-12-22 00:00:00.000 5                    41258
5023        SL      2012-12-23 00:00:00.000 6                    41258
5023        SL      2012-12-24 00:00:00.000 7                    41258
5023        PL      2012-12-28 00:00:00.000 8                    41261
5023        PL      2012-12-29 00:00:00.000 9                    41261
5023        PL      2012-12-30 00:00:00.000 10                   41261
5023        PL      2012-12-31 00:00:00.000 11                   41261

Seeing this, you will realize that the above query generates the same Diff value for all consecutive dates. Now you can easily group by PAYCODE, LV_TYPE and Diff to get the merged rows and the counts you want.

Configure Entity Tags (ETags)


If you have ever used YSlow (http://developer.yahoo.com/yslow/) to analyze a web page and improve its performance, you may have come across the “Configure ETags” alert.

[Image: YSlow analysis result]

What are Entity Tags (ETags)?

Entity tags (ETags) are a mechanism web servers and the browser use to determine whether a component in the browser’s cache matches one on the origin server. Since ETags are typically constructed using attributes that make them unique to a specific server hosting a site, the tags will not match when a browser gets the original component from one server and later tries to validate that component on a different server. (http://developer.yahoo.com/performance/rules.html#etags)

“Configure ETags” is one of the recommended best practices for speeding up a web site. If you are an ASP.NET developer, you can configure this by adding the following to your web.config file:

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <clear/>
      <add name="ETag" value=" "/>
    </customHeaders>
  </httpProtocol>
</system.webServer>
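
To confirm it worked, you can check the response headers of any static resource – for example with curl (or the browser’s network tab) – and verify that the ETag header is now blank (the URL below is just a placeholder):

    curl -I http://www.yoursite.example/images/logo.png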

How to validate a credit card number?


You all know what information is contained in your NIC number. But do you know what information is contained in a credit card number? Here are some useful details.

Card Length

Typically, credit card numbers are all numeric, and their length is between 12 and 19 digits.

  • 14, 15, 16 digits – Diners Club
  • 15 digits – American Express
  • 13, 16 digits – Visa
  • 16 digits – MasterCard

For more information refer: http://en.wikipedia.org/wiki/Bank_card_number

Containing Information

[Image: sample credit card number breakdown]

1 – Major Industry Identifier (MII)

The first digit of the credit card number is the Major Industry Identifier (MII). It designates the category of the entity which issued the card.

  • 1 and 2 – Airlines
  • 3 – Travel
  • 4 and 5 – Banking and Financial
  • 6 – Merchandising and Banking/Financial
  • 7 – Petroleum
  • 8 – Healthcare, Telecommunications
  • 9 – National Assignment

2 – Issuer Identification Number

The first 6 digits are the Issuer Identification Number (IIN). It identifies the institution that issued the card. Following are some of the major IINs (a small lookup sketch follows the list).

  • Amex – 34xxxx, 37xxxx
  • Visa – 4xxxxx
  • MasterCard – 51xxxx – 55xxxx
  • Discover – 6011xx, 644xxx, 65xxxx
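
As a rough illustration of how these prefixes could be checked in code (a sketch only; real-world IIN ranges are broader than this list):

public static string GetCardType(string cardNumber)
{
    // Prefix checks based on the IIN list above (illustrative, not exhaustive)
    if (cardNumber.StartsWith("34") || cardNumber.StartsWith("37"))
        return "Amex";

    if (cardNumber.StartsWith("4"))
        return "Visa";

    if (cardNumber.StartsWith("51") || cardNumber.StartsWith("52") ||
        cardNumber.StartsWith("53") || cardNumber.StartsWith("54") ||
        cardNumber.StartsWith("55"))
        return "MasterCard";

    if (cardNumber.StartsWith("6011") || cardNumber.StartsWith("644") ||
        cardNumber.StartsWith("65"))
        return "Discover";

    return "Unknown";
}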

3 – Account Number

Taking away the 6 identifier digits and the last digit, the remaining digits are the person’s account number (the 7th digit onwards, excluding the last digit).

4 – Check digit

The last digit is known as the check digit or checksum. It is used to validate the credit card number using the Luhn algorithm (Mod 10 algorithm).

For more information, please refer to:
http://en.wikipedia.org/wiki/Bank_card_number
http://en.wikipedia.org/wiki/List_of_Issuer_Identification_Numbers

Luhn algorithm (Mod 10)

The Luhn algorithm or Luhn formula, also known as the “modulus 10” or “mod 10” algorithm, is a simple checksum formula used to validate a variety of identification numbers, such as credit card numbers, IMEI numbers, National Provider Identifier numbers in the US and Canadian Social Insurance Numbers. It was created by IBM scientist Hans Peter Luhn. (http://en.wikipedia.org/wiki/Luhn_algorithm)

When you are implementing an e-commerce application, it is good practice to validate the credit card number before sending it to the bank for validation. This saves a lot of time and money by avoiding an unnecessary trip to the bank.

Here are the Luhn steps, which can be used to validate a credit card number.

4 0 1 2 8 8 8 8 8 8 8 8 1 8 8 1

1. Starting from the check digit and moving right to left, double the value of every second digit (the check digit itself is not doubled).

[Image: Mod 10 step 1]

2. If doubling a digit results in a two-digit number, add up its digits to get a single-digit number. This results in eight single-digit numbers.

[Image: Mod 10 step 2]

3. Now add in the un-doubled digits from the odd positions.

[Image: Mod 10 step 3]

4. Add up all the digits in this number

[Image: Mod 10 step 4]

If the final sum is divisible by 10, then the credit card number is valid. If it is not divisible by 10, the number is invalid.
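
For the sample number above, the arithmetic works out like this (digits listed right to left, starting from the check digit):

Digits (right to left):           1  8  8  1  8  8  8  8  8  8  8  8  2  1  0  4
Step 1 – double every 2nd digit:  1 16  8  2  8 16  8 16  8 16  8 16  2  2  0  8
Step 2 – reduce to single digits: 1  7  8  2  8  7  8  7  8  7  8  7  2  2  0  8
Steps 3 & 4 – sum everything:     90

The total is 90, which is divisible by 10, so the number is valid.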

Here is the code sample that I used to do the Mod 10 validation:

public static bool Mod10Check(string creditCardNumber)
{
    // Check whether the input string is null or empty
    if (string.IsNullOrEmpty(creditCardNumber))
    {
        return false;
    }

    // 1. Starting from the check digit, double the value of every second digit
    // 2. If doubling results in a two-digit number, add up its digits (e / 10 + e % 10)
    // 3. Sum everything, including the un-doubled digits
    int sumOfDigits = creditCardNumber.Where(e => e >= '0' && e <= '9')
                        .Reverse()
                        .Select((e, i) => ((int)e - 48) * (i % 2 == 0 ? 1 : 2))
                        .Sum(e => e / 10 + e % 10);

    // If the final sum is divisible by 10, the credit card number is valid; otherwise it is invalid
    return sumOfDigits % 10 == 0;
}
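
A quick way to try it out (this assumes the Mod10Check method above is in scope and System.Linq is imported):

// Sample number from the walkthrough above – the checksum works out to 90
Console.WriteLine(Mod10Check("4012888888881881")); // True

// Changing the last digit breaks the checksum
Console.WriteLine(Mod10Check("4012888888881882")); // False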

The original article was modified according to the comments made by Code Project super users.

Download Demo Project

Exclude Crystal Report embedding when building an ASP.NET web site


If you have worked with ASP.NET web applications that include a number of Crystal Reports, you may have noticed that it takes a long time to build the web site. This happens because, by default, Crystal Reports are set to be embedded as a resource; the default web.config reflects this, as shown below.

[Image: default web.config configuration for Crystal Reports]
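
The relevant section looks roughly like this (a reconstruction of the default; the key point is that embedRptInResource is set to true):

<businessObjects>
    <crystalReports>
        <rptBuildProvider>
            <add embedRptInResource="true"/>
        </rptBuildProvider>
    </crystalReports>
</businessObjects>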

To resolve this, you can simply set embedRptInResource="false" as follows.

<businessObjects>
    <crystalReports>
        <rptBuildProvider>
            <add embedRptInResource="false"/>
        </rptBuildProvider>
    </crystalReports>
</businessObjects>

(413) Request Entity Too Large


Recently I worked with a WCF web service hosted in IIS 7, and I used one of the service methods to send a byte array containing a picture. This works well with small images, but when I tried to upload a larger picture, the WCF service returned an error: (413) Request Entity Too Large. I got the same error a month ago while developing an ASP.NET web application hosted on IIS 7 over SSL. In that case there was no file upload on the page; the error occurred when accessing pages containing a GridView control with a large number of pages. The same pages worked fine over HTTP but not over HTTPS.
In both scenarios, I googled and found different solutions.

1. uploadReadAheadSize
In the second scenario, the error occurred because the page was very large, which made the request entity body larger when the page was submitted.
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/7e0d74d3-ca01-4d36-8ac7-6b2ca03fd383.mspx?mfr=true

Basically, what happens is that if you have a web site with SSL and “Accept Client Certificates” enabled, HTTP requests are limited to the uploadReadAheadSize of the site. To resolve this, you have to increase the uploadReadAheadSize (the default is 48 KB):

appcmd.exe set config -section:system.webserver/serverruntime /uploadreadaheadsize:1048576 /commit:apphost

2. maxReceivedMessageSize
WCF by default limits messages to 64 KB to avoid DoS attacks with large messages. By default it sends a byte[] as a base64-encoded string, which increases the size of the message by roughly 33%. Therefore, if the uploaded file is larger than about 48 KB, it raises the above error (48 KB * 1.33 ≈ 64 KB). (NB: you can use MTOM – Message Transmission Optimization Mechanism – to optimize the message.)

You can solve this issue by modifying “maxReceivedMessageSize” in the Web.config file to accept larger messages:

<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding maxReceivedMessageSize="10485760">
        <readerQuotas ... />
      </binding>
    </basicHttpBinding>
  </bindings>  
</system.serviceModel>
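
One thing to watch out for: if you give the binding a name, the endpoint must reference it through bindingConfiguration, otherwise the larger limit is not picked up. A minimal sketch (the service, contract and binding names here are illustrative, not from the original project):

<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="LargeMessageBinding" maxReceivedMessageSize="10485760" />
    </basicHttpBinding>
  </bindings>
  <services>
    <service name="MyApp.PictureService">
      <endpoint address=""
                binding="basicHttpBinding"
                bindingConfiguration="LargeMessageBinding"
                contract="MyApp.IPictureService" />
    </service>
  </services>
</system.serviceModel>

The same <binding> element is also where you would switch the encoding to MTOM (messageEncoding="Mtom") if you decide to go that route.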