This filter converts an image to greyscale in a color-aware fashion, preserving the contrast between different colors.

The algorithm used is a simplification of the one described in this paper. The basic idea is to preserve the visual difference between as many pairs of colors in the image as possible during the greyscale conversion. If two colors, say a bright red and a bright green, would end up looking like the same shade of grey, one or both of them are darkened or lightened so that you can still tell them apart. In the color image, the difference between two colors is measured as the distance between them in CIELab colorspace, expressed as a fraction of the maximum distance between any two colors in the image. In the greyscale image, the difference is measured as the difference in brightness between the two shades of grey, expressed as a fraction of the full range from black to white. reHue.net implements the algorithm like this:
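The two difference measures can be sketched in Python as follows. This is a minimal illustration, assuming CIELab lightness L* runs from 0 to 100; the function names are mine, not reHue.net's:

```python
import math

def cie76(lab1, lab2):
    # CIE76 color difference: plain Euclidean distance between
    # two (L, a, b) triples in CIELab space.
    return math.dist(lab1, lab2)

def color_difference_fraction(lab1, lab2, max_pair_distance):
    # Color difference, expressed as a fraction of the largest
    # pairwise distance found anywhere in the image.
    return cie76(lab1, lab2) / max_pair_distance

def grey_difference_fraction(l1, l2):
    # Greyscale difference, expressed as a fraction of the
    # black-to-white range (L* = 0 .. 100).
    return abs(l1 - l2) / 100.0
```

Expressing both differences as fractions of their respective ranges is what makes them comparable during the optimization described below.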

• The image is quantized to 128 colors and the number of pixels of each color is recorded. The JFIF quantization algorithm is used (as described here) instead of the median cut method. To speed things up, the quantization uses a random sample of up to 65536 pixels instead of the whole image.
• The quantized colors are converted to CIELab colorspace, and the distance between each pair of colors in CIELab space is computed. There are (128*127)/2 = 8128 pairs in total. The CIE76 color difference metric is used instead of CIE94 because it is simpler and faster, even if it is a little less accurate.
• An optimal vector for converting from color to greyscale in CIELab space is computed for the image. The vector is a set of three numbers: each CIELab component is multiplied by the corresponding number and the results are summed to give the greyscale value. Standard CIELab greyscale conversion uses only the brightness value and discards the a and b color components, so its vector would be (1, 0, 0). Changing the zeros to positive or negative values lightens or darkens greens versus reds, or blues versus yellows. To find the optimal vector for the image, the algorithm evaluates a cost function for each candidate vector: it goes through each pair of colors in the image and compares the original CIELab distance between them with the distance after greyscale conversion, weighting each pair by the number of pixels that use those colors. The optimal vector is the one that minimizes the total difference between the color distances before and after conversion, summed over all the color pairs. Instead of using Fletcher-Reeves optimization, the algorithm does a grid search over a range of plausible conversion vectors (ones where the brightness component is the main factor in the conversion) and then, starting from the vector that produces the lowest difference sum, uses a steepest descent search to find a local minimum.
• Using the greyscale conversion vector, the image is converted to greyscale. Some clamping and attenuation are applied to prevent overwhitening. A gamma value of 2.2 is used.

The results are somewhat similar to what can be achieved using the Black and White Adjustment Layer in Photoshop CS3, but it is a completely automatic process. The downside is that the results can sometimes be aesthetically unappealing compared to a hand-tuned solution.

A related filter is the Brightness-adjusted Daltonized Image filter, which adjusts the brightness of different colors within an image according to this algorithm, but does not convert the image to greyscale.

In the images below, compare the difference in contrast between the greens and reds in the gamma-corrected greyscale image with those in the color-adjusted greyscale image:

Original Image: Gamma-corrected Greyscale Image filter applied:  