While constructing the GitHub Team Pull Request Highlighter, I ran into the need to translate color codes back and forth between hex (#ff0000) and RGB/RGBA (rgba(255, 0, 0, 1)).
hexToRgb returns an object with r, g, and b properties.
var color = hexToRgb("#1fbcff"); color.toString(); // rgb(31, 188, 255) color.r; // 31 color.g; // 188 color.b; // 255
hexToRgb can also accept an alpha value as a second parameter. This allows the output of toString() to include the alpha channel:
var color = hexToRgb("#1fbcff", 0.5); color.toString(); // rgba(31, 188, 255, 0.5) color.r; // 31 color.g; // 188 color.b; // 255 color.alpha; // 0.5
rgbToHex returns a hex string in the format #ffffff. It can accept a wide variety of inputs.
rgbToHex(31, 188, 255);  // #1fbcff
rgbToHex("rgb(31, 188, 255)");  // #1fbcff
rgbToHex("rgba(31, 188, 255, 0.5)");  // #1fbcff
rgbToHex can also accept the color objects produced by hexToRgb:
var color = hexToRgb("#1fbcff"); rgbToHex(color); // #1fbcff var color2 = hexToRgb("#1fbcff", 0.5); rgbToHex(color2); // #1fbcff
The repo for this library is available here (or click here if you want to save time and go directly to the library).
A little over a year ago, I posted about a secret project I had been working on: the official iOS and Android app for Get Drunk Not Fat. There have been a couple of big updates since then, but today I have another to show off.
The latest interface update adopts the flat UI style of iOS 7, along with a more appealing color scheme. Alongside that, performance has been improved, so you should see all-around speed-ups on your device.
While working with videos for a couple of blogs, I've run across performance issues when embedding multiple videos on a page. Once you get past 2-3 embedded videos on a page (something that isn't terribly challenging if video is a primary focus of the site), the scroll speed slows to a crawl and overall page performance begins to deteriorate.
Here’s how the page appears with and without the library:
This requires no modification to the existing code (unless you're adding custom styling to the iframes). All you need to do is embed the script:
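Under the hood, one common approach for a script like this (and a reasonable sketch of the idea, though not necessarily this library's exact implementation) is to strip each iframe's src and swap in a lightweight click-to-load placeholder. The styling and placeholder text below are stand-ins of my own, and the sketch assumes the iframes carry width and height attributes:

// Illustrative sketch: defer loading video iframes until the user clicks a placeholder.
document.addEventListener("DOMContentLoaded", function () {
  var iframes = Array.prototype.slice.call(document.querySelectorAll("iframe"));
  iframes.forEach(function (iframe) {
    var src = iframe.getAttribute("src");
    var placeholder = document.createElement("div");
    placeholder.textContent = "Click to load video";
    placeholder.style.cssText = "width:" + iframe.width + "px;height:" + iframe.height + "px;" +
      "background:#000;color:#fff;text-align:center;line-height:" + iframe.height + "px;cursor:pointer;";
    placeholder.addEventListener("click", function () {
      iframe.setAttribute("src", src);                      // load the heavy embed on demand
      placeholder.parentNode.replaceChild(iframe, placeholder);
    });
    iframe.removeAttribute("src");                          // keep the page light until then
    iframe.parentNode.replaceChild(placeholder, iframe);
  });
});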
If you run into any bugs or problems, let me know in the comments or on the GitHub page!
I was inspired by a Tech Talk on custom Web Components given by Brad Gignac a couple of months ago at Rackspace, so while looking for a way to cleanly parse Markdown inside of an HTML document, I turned to custom HTML5 tags. To my dismay, I couldn't find anybody implementing something like this for Markdown.
To fix this issue, I've put together a small library to implement a <markdown> tag. Since custom element names are required to include a dash, I've had to use a hyphenated tag name instead.
The library is available for download on GitHub. If you would like to see an implementation example, read on after the break.
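As an illustration of the general technique (written against today's Custom Elements API rather than the 2014-era registration call), a markdown element can be registered like this. The mark-down tag name and the parseMarkdown() helper are stand-ins of my own, not necessarily what the library actually uses:

// Illustrative sketch: a custom element that renders its text content as Markdown.
// parseMarkdown() is a hypothetical function that returns HTML (e.g. from a Markdown parser).
class MarkDownElement extends HTMLElement {
  connectedCallback() {
    // Render the element's raw text as Markdown once it is attached to the document.
    this.innerHTML = parseMarkdown(this.textContent);
  }
}

// Custom element names must contain a dash, hence "mark-down" rather than "markdown".
customElements.define("mark-down", MarkDownElement);

// Usage in the page:
//   <mark-down># Hello *world*</mark-down>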
I’ve spent a lot of time in the last couple months learning my way around building Chrome extensions, resulting in a couple useful extensions for day-to-day work. One of the most vital pieces of knowledge was understanding how to modify the DOM when something in the popup of an extension was clicked or modified. This involves a series of somewhat confusing message passes from the popup to the background script to a content script on the page. The following explains that process.
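A minimal sketch of that relay looks something like the following; the "highlight" action and the element ID are placeholders for whatever your extension actually does, and the content script is assumed to be registered in the extension's manifest:

// popup.js — send a message when something in the popup is clicked.
document.getElementById("highlight-button").addEventListener("click", function () {
  chrome.runtime.sendMessage({ action: "highlight" });
});

// background.js — relay the popup's message to the content script in the active tab.
chrome.runtime.onMessage.addListener(function (message) {
  chrome.tabs.query({ active: true, currentWindow: true }, function (tabs) {
    chrome.tabs.sendMessage(tabs[0].id, message);
  });
});

// content.js — receive the relayed message and modify the page's DOM.
chrome.runtime.onMessage.addListener(function (message) {
  if (message.action === "highlight") {
    document.body.style.backgroundColor = "#ff0";
  }
});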
While it's a relatively minor annoyance, I wanted to find a way around the requirement of launching boot2docker, the lightweight Linux VM that Docker relies on under OS X, every time I rebooted my machine. The following scripts automate this process and eliminate the need to start that VM manually.
To set this system up, run the following command:
curl -sS https://raw.githubusercontent.com/amussey/blog-posts/master/2014/05_OSX-boot2docker-on-Startup/setup.sh | /bin/bash
For a detailed description of what this script is doing, you can check out the write up after the break! (I’d recommend at least glancing over it so you’re not blindly running shell scripts from the internet.)
Just finished putting together the montage of our adventures last weekend at Awesome Con. Check it out! Also, a gif of Superman:
Those spinning up their first virtual server on a cloud host like Rackspace might be confused by the difference between the standard OS images and the PVHVM images. Rackspace has a good document explaining the major differences between the two, which break down to a few main bullet points:
Network and disk IO are faster with PVHVM images because QEMU emulation is bypassed.
PVHVM requires a bit more memory overhead than PV. If your application is memory intensive, or if it is optimized for PV Memory Management Units, PV might be a better choice.
If you are striving for performance optimization, test both options to determine which one performs better for your application.
If you want to check to see which type of virtualization your system is using, you can run the following command:
if [ `dmesg | egrep -i 'xen|front' | grep 'HVM' | wc -l` -eq 0 ] ; then echo "PV" ; else echo "PVHVM" ; fi
So, the burning question: does it actually matter?
TL;DR: for IO, absolutely; for CPU, not so much.
To try out the difference between PV and PVHVM servers, I spun up two 1GB Performance Cloud Servers running Ubuntu 14.04 and Ubuntu 14.04 PVHVM. The summary of the benchmarks is as follows:
-------------------------------------------------------------
Benchmark          | PV                   | PVHVM
-------------------------------------------------------------
sysbench (CPU)     | 11.9433 ms/response  | 11.85 ms/response
hdparm (Disk IO)   | 8492.9067 MB/s       | 9030.32667 MB/s
dd (Disk IO)       | 467.4 MB/s           | 1.0 GB/s
-------------------------------------------------------------
sysbench was used for the CPU benchmark. It can be installed with:

sudo apt-get install sysbench
The command I ran for the benchmark was:
sysbench --test=cpu --cpu-max-prime=50000 run
You can view the full, raw results here. Averaged over the three runs, PV's per-request response time was 11.9433 ms with a total run time of 119.466 s, while PVHVM's per-request response time was 11.85 ms with a total run time of 118.519933 s.
hdparm and dd were used for the hard drive benchmarks. For hdparm, the command used was:
hdparm -Tt /dev/xvda1
The result of the three trials for PV averaged out to:
Timing cached reads:         16750 MB in 1.97 seconds = 8492.9067 MB/sec
Timing buffered disk reads:  1200 MB in 3.00 seconds = 399.6333 MB/sec
while the three trials with PVHVM averaged out to:
Timing cached reads:         17969.333 MB in 1.99 seconds = 9030.32667 MB/sec
Timing buffered disk reads:  981.333 MB in 3.00 seconds = 326.72 MB/sec
For dd, I ran 5 trials with the command:
dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output
The averages of the results from the two systems were:
PV:    83886080 bytes (84 MB) copied, 0.1795364 s, 467.4 MB/s
PVHVM: 83886080 bytes (84 MB) copied, 0.08190668 s, 1.0 GB/s
There were a couple of things these tests didn't take into account, such as memory overhead; they only show the raw, head-to-head performance of the two virtualization types. If you're trying to make a decision for your own application, run your own benchmarks against your own application stack.