My new job (Software Development Manager @ Amazon.com) has basically taken all of my free time, so I apologize to everyone for not posting anything new or responding to comments.
I’m hoping it slows down in a few months so I can start posting again.
The last few months of work have been crunch time: 50+ hours a week (usually more like 60+), schedules that keep changing, and requirements that move around more than a cornered mouse that just downed its body weight in sugar.
We’ve all been there, it’s part of what we do.
In situations like this we tend to look up and not around. It’s upper management’s fault: they didn’t plan things, they didn’t budget time correctly, and so on. But is it really their fault?
Let’s take a step back so I can tell you more about me. I’m a Software Development Manager and Architect. I’m the guy who buffers the developers from the business side as much as possible. My job is to make the developers’ lives easier, give them direction on where the project is going, and help them grow. At the same time I’m shielding them from external pressures (sometimes very heavy ones), working to set realistic business goals that meet the customers’ needs (without killing the developers), and basically trying to please everyone and keep everyone happy. (Yeah, I code a good bit as well. ;))
OK, back to our regularly scheduled program:
When a project hits an unexpected crunch, or management waits until the last second to make a decision, is it entirely their fault that the project hits a heavy crunch? No. You need to be able to look at yourself in the mirror and ask: what could I have done differently?
It could be any number of factors, and each one should be examined to see how improvements can be made next time. What I’m attempting to point out is that very rarely is something one person’s fault. I truly believe that to be a good developer or manager you need to be able to look at yourself, critique, and improve. Accountability matters. You are human; you will get it wrong at some point. Accept that fact, do your best to get it right, and when you get it wrong, learn from it.
‘Why do we fall, sir? So that we can learn to pick ourselves up.’
— Batman Begins
I should mention that sometimes you can’t control the way things play out. There will be situations where you are the low man on the totem pole and can’t apply the pressure that is needed. In cases like this you need to, again, look at yourself. Recognize the situation you are in and adjust accordingly. Maybe someone else can apply pressure for you? Maybe there is some other way to get things done. There are whole books on this.
I know this isn’t a technical post like 99% of my other posts. My other posts are tricks to get things working or some code to make you a better developer. The thing is, development is more than just good code. It’s learning, exploring, interacting, planning, and so many other things. A truly good developer not only knows what they know well, they know what they don’t know and they admit when they got it wrong. Don’t be afraid of being wrong.
“I have not failed 700 times. I’ve succeeded in proving 700 ways how not to build a lightbulb.”
— Thomas Edison
If you have been paying attention to tech news and the U.S. government you may have heard about the ‘Do Not Track’ initiative, which may become law. In a nutshell, Do Not Track (DNT) is a proposal that would require web sites not to track a user on sites they don’t visit.
Ok, take the time to read that very carefully: ‘sites they don’t visit’. You can still track the user on your own site if they come to it directly, but you cannot share tracking information with other sites. A good summary of this can be found at www.donottrack.us. Specifically, the site sums it up as: ‘Do Not Track is a technology and policy proposal that enables users to opt out of tracking by websites they do not visit, including analytics services, advertising networks, and social platforms.’ So, if your website uses Google Analytics or shares info with Facebook, you need to know about this and you need to make some changes in your code.

DNT is a browser-based option that allows the user to say ‘Don’t track me’ and is supported in most newer browsers. For instance, in Firefox 14.0.1 it can be found under Options -> Privacy -> Tracking.
What then happens is that the browser, with every request, will send an additional HTTP header telling the web site not to track the user. The header is named `DNT` (see Wikipedia’s List_of_HTTP_header_fields) and has two possible values: `0` or `1`. If `DNT` is set to `1`, the user should not be tracked. Any other value should permit tracking (though the spec says the only other value is `0`). From there it is up to the web site to pay attention to the header or to ignore it.
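Server-side, the whole check boils down to one string comparison on that request header. Here’s a minimal sketch (a made-up helper of my own, written as a plain JavaScript function to keep it language-neutral; translate it to whatever your stack is):

```javascript
// Hypothetical helper illustrating the DNT rules above: only the exact
// value "1" means "do not track"; any other value -- including an absent
// header (undefined) -- permits tracking.
function isTrackable(dntHeaderValue) {
  return dntHeaderValue !== "1";
}
```

Note that the absent-header case is why some languages need a null check before comparing the value.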
If you decide that you are going to support DNT, good for you! Most of us don’t like to be tracked so we should lead by example! Also, it’s very easy to support!
Since most of the tracking code (such as Google Analytics) is in the presentation layer (.html/.jsp/.gsp/etc.) it makes sense to place the DNT check there as well. Nothing is easier in this case than a custom tag. The tag (which means that if your tracking code lives in plain HTML you will have to move it to a .jsp/.gsp or some other dynamic page) simply needs to check whether DNT == 1 and, if it does, skip rendering what is in the body of the tag. Let’s first take a look at the final result:

```html
<track:trackable>
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'xxxxxxx']);
  _gaq.push(['_trackPageview']);

  (function() {
    var ga = document.createElement('script');
    ga.type = 'text/javascript';
    ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(ga, s);
  })();
</track:trackable>
```
Time to break it down:
`<track:trackable>`

This is our custom tag. Anything within it will be rendered only if it is OK to track the user (DNT != 1).
```javascript
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'xxxxxxx']);
_gaq.push(['_trackPageview']);

(function() {
  var ga = document.createElement('script');
  ga.type = 'text/javascript';
  ga.async = true;
  ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
  var s = document.getElementsByTagName('script')[0];
  s.parentNode.insertBefore(ga, s);
})();
```

This is the code to include in the page if tracking is allowed (in this case, Google Analytics).
`</track:trackable>`

This ends our tag.
Now let’s look at the code for the tag. Again, this is a Grails taglib, but I’ll go over it step by step so you can translate it to whatever you want.
```groovy
class TrackTagLib {
    static namespace = "track"

    /**
     * If the user is trackable, the contents between the tags are rendered
     */
    def trackable = { attrs, body ->
        if (request.getHeader("DNT") != "1") {
            out << body()
        }
    }
}
```
Wow, that’s simple, isn’t it? Let’s break it down:
`class TrackTagLib {`

Define the class.
`static namespace = "track"`

This names the tag prefix for us (the `track` in `<track:trackable>`).
`def trackable = { attrs, body ->`

This defines the `trackable` tag as a closure that is passed two variables: `attrs` (the tag’s attributes) and `body` (a closure that renders the tag’s contents).
`if (request.getHeader("DNT") != "1") {`

If the HTTP header `DNT` does not equal 1 (i.e., we are allowed to track). Note that in some languages you may need a null check for whether the DNT header even exists before you check its value (not needed in Groovy/Grails).
`out << body()`

Since we are allowed to track, render the body of what is between the `<track:trackable>` tags.
Not too difficult, and very easy to use. Putting the code in a taglib allows you to use it on any of your pages so you can quickly and easily support DNT.
I thought I should take a minute (just sit right there and let me tell you how I became the Prince of a town called Bel Air) to comment on Google Analytics and its Terms of Service (TOS).
Most developers do not read TOS agreements, and you really should (see South Park season 15: ‘HumancentiPad’). One section in particular needs attention (currently section 7):
You will not (and will not allow any third party to) use the Service to track, collect or upload any data that personally identifies an individual (such as a name, email address or billing information), or other data which can be reasonably linked to such information by Google. You will have and abide by an appropriate Privacy Policy and will comply with all applicable laws and regulations relating to the collection of information from Visitors. You must post a Privacy Policy and that Privacy Policy must provide notice of Your use of cookies that are used to collect traffic data, and You must not circumvent any privacy features (e.g., an opt-out) that are part of the Service.
So, you may have a few action items.
So what happens if you get a directive that your site should be able to track the flow of a user through the site? What can you do?
Well, you’re actually OK. DNT still allows you to track a user; you just can’t give the info to sites the user hasn’t visited (or, to be safer, to anything other than your own site). You can still record the user name and the pages they visited to get a user page flow, but you can’t use something like Google Analytics to do this.
Or can you?
What you could do is record anonymous user page flow. Google Analytics supports the ability to pass in your own custom variables. One of those variables could be a Universally Unique Identifier (UUID) that could be used with Google Analytics filters to get an idea of user page flow.
PLEASE NOTE: The UUID cannot be related back to the actual user of the system. You can’t use their user name, db primary key, or ANYTHING that would allow you to tie the user to that UUID. That would violate the TOS.
So what do you do? Easy: you put the UUID in the user’s session, and every time your Google Analytics code is called you pass the UUID along. If you really think about this you will realize that technically you could, at this point, relate the user to the UUID by digging into the sessions (via MBeans or some custom code), and you would be right; we would be violating the TOS at that point. It is more of a “just don’t do it”. Since this is a little sketchy and could be abused, I will not be posting code on how to do this. You are on your own; please follow the TOS.
As you can see, it is very easy to introduce support for DNT. You can still track users on your site, but you cannot share it with other sites.
Take the time to be considerate to the users and implement DNT.
I recently had a problem where I needed to intercept any AJAX-based calls from the browser to, well, anything. After a lot of Googling and digging through StackOverflow, all I ever found were jQuery-specific event-listener suggestions or partial solutions that didn’t really work.
There are many situations where you want to grab all the AJAX calls being made: for debugging, for interception to override existing functionality, for parameter injection, etc. The code I wrote is a jQuery plug-in but should intercept any AJAX calls, even ones not made through jQuery.
First, if you are not familiar with basic AJAX functionality, it’s all based around the XMLHttpRequest object. I suggest you read W3Schools’ and Mozilla’s descriptions. Let’s look at the code now. Source code available here: ajaxInterceptor.js
```javascript
/*
 * Util for intercepting all ajax calls
 *
 * Options:
 *   open : {
 *     fn : function() {},  // function to call when open is called
 *     scope : xxx          // scope to execute the function in
 *   },
 *   send : {
 *     fn : function() {},  // function to call when send is called
 *     scope : xxx          // scope to execute the function in
 *   },
 *   setRequestHeader : {
 *     fn : function() {},  // function to call when setRequestHeader is called
 *     scope : xxx          // scope to execute the function in
 *   }
 */
(function( $ ) {

    var defaultOptions = {
        open : { },
        send : { },
        setRequestHeader : { }
    };

    var options;
    var aiOpen = window.XMLHttpRequest.prototype.open;
    var aiSend = window.XMLHttpRequest.prototype.send;
    var aiSet = window.XMLHttpRequest.prototype.setRequestHeader;
    var recursion = false;

    var methods = {
        init : function(opts) {
            options = $.extend(true, defaultOptions, opts);
            methods.enable();
        },

        enable : function() {
            window.XMLHttpRequest.prototype.open = function(method, url, async, uname, pswd) {
                if (options.open.fn) {
                    options.open.fn.call(options.open.scope ? options.open.scope : this,
                                         method, url, async, uname, pswd);
                }
                aiOpen.call(this, method, url, async, uname, pswd);
            };

            window.XMLHttpRequest.prototype.send = function(data) {
                if (options.send.fn && !recursion) {
                    recursion = true;
                    options.send.fn.call(options.send.scope ? options.send.scope : this, data);
                    recursion = false;
                }
                aiSend.call(this, data);
            };

            window.XMLHttpRequest.prototype.setRequestHeader = function(key, value) {
                if (options.setRequestHeader.fn) {
                    options.setRequestHeader.fn.call(
                        options.setRequestHeader.scope ? options.setRequestHeader.scope : this,
                        key, value);
                }
                aiSet.call(this, key, value);
            };
        },

        disable : function() {
            window.XMLHttpRequest.prototype.open = aiOpen;
            window.XMLHttpRequest.prototype.send = aiSend;
            window.XMLHttpRequest.prototype.setRequestHeader = aiSet;
        }
    };

    $.fn.ajaxInterceptor = function( method ) {
        if ( methods[method] ) {
            return methods[method].apply( this, Array.prototype.slice.call( arguments, 1 ) );
        } else if ( typeof method === 'object' || !method ) {
            return methods.init.apply( this, arguments );
        } else {
            $.error( 'Method ' + method + ' does not exist on ajaxInterceptor' );
        }
    };

})(jQuery);
```
Let’s go over the code so you can understand what is going on.
```javascript
var defaultOptions = {
    open : { },
    send : { },
    setRequestHeader : { }
};
```
The first thing I do is set up the defaults that developers can override, as well as ensure that I don’t get any ‘undefined’ errors in the JavaScript.
```javascript
var options;
var aiOpen = window.XMLHttpRequest.prototype.open;
var aiSend = window.XMLHttpRequest.prototype.send;
var aiSet = window.XMLHttpRequest.prototype.setRequestHeader;
var recursion = false;
```
Next I define some variables, but the key things to pay attention to are the `aiOpen`, `aiSend`, and `aiSet` variables. These capture `XMLHttpRequest`’s original open, send, and setRequestHeader methods. I can use them later to disable whatever I do in the code by reverting my changes.
```javascript
init : function(opts) {
    options = $.extend(true, defaultOptions, opts);
    methods.enable();
},
```
The init() method does nothing more than take in any options the developer passes, use them to override the defaultOptions, and store the resulting combination in the `options` variable. After that it calls the enable() method.
```javascript
window.XMLHttpRequest.prototype.open = function(method, url, async, uname, pswd) {
    if (options.open.fn) {
        options.open.fn.call(options.open.scope ? options.open.scope : this,
                             method, url, async, uname, pswd);
    }
    aiOpen.call(this, method, url, async, uname, pswd);
};
```
The first part of the `enable` method overrides `XMLHttpRequest`’s `open` method with my own. All my method does is call a function that the developer may have passed in via the options, and then call the normal `open` method. This way we can run whatever function we want before the real `open` is actually executed.
`send` and `setRequestHeader` do the exact same thing as `open`, so I won’t go over them (hope you don’t mind).
```javascript
disable : function() {
    window.XMLHttpRequest.prototype.open = aiOpen;
    window.XMLHttpRequest.prototype.send = aiSend;
    window.XMLHttpRequest.prototype.setRequestHeader = aiSet;
}
```
As I mentioned before, one of the main reasons I placed the original methods in variables is to be able to undo what I overwrote. Here I’m just restoring the original functionality of the XMLHttpRequest object any time `disable` is called.
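The save/override/restore trick isn’t specific to XMLHttpRequest. Here’s the same pattern on a toy target (a standalone sketch of my own, not part of the plug-in) so you can see all three steps at once:

```javascript
var calls = [];

// 1. Save the original function in a variable (like aiOpen/aiSend/aiSet)
var originalMax = Math.max;

// 2. Override it with a wrapper that runs our hook, then delegates
Math.max = function () {
  calls.push(arguments.length);              // our "interceptor" hook
  return originalMax.apply(this, arguments); // call the real function
};

var result = Math.max(3, 7); // wrapper runs first, then the real Math.max

// 3. Restore the original (this is exactly what disable() does)
Math.max = originalMax;
```

The only extra wrinkle in the plug-in is the recursion guard around `send`, which keeps the hook from re-entering itself if the hook happens to trigger another AJAX call.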
Let’s now look at a few ways this can be used. The most common would be to intercept any calls to the AJAX service on the server side; in XMLHttpRequest terms, that’s the `send` call.
```html
<script src="jquery-1.7.2.min.js"></script>
<script src="ajaxInterceptor.js"></script>
<script>
    $(document).ajaxInterceptor({
        send : {
            fn : function(data) {
                alert('send');
            }
        }
    });
</script>
```
First we bring in the jQuery library and, of course, the ajaxInterceptor library itself. Next we attach the plug-in to the `document`, as that is basically the root of what we want (we are just attaching it to one of the most parent objects, but since the plug-in patches the global XMLHttpRequest prototype, we really could attach it to anything).
The result: `alert('send')` is executed any time `send` is called.
That’s all there is to it! Put your code in the `fn` function and it will be executed whenever any AJAX `send` is called.
I should note that the `data` passed to your function is the same data object that would be passed to the underlying XMLHttpRequest `send` function. Also, the `send` option can be given a `scope` if you want your function to execute in a specific scope.
The next example shows how to disable the interceptor if you ever need to.
```javascript
$(document).ajaxInterceptor("disable");
```
Again, pretty easy. The code follows the standard jQuery plug-in methodology.
I hope this code helps you out. I really could not find any good examples anywhere that were as encompassing as this one.
When you are generating a lot of output files, as I do on some of my projects, at some point you want to archive them. So you end up creating an archive directory, placing the files in there (usually with something like org.apache.commons.io.FileUtils), and then having your network guys or admins set up backups of the archive (or putting the archive itself on a fully backed-up network appliance).
This is great and works really well, except that you are most likely wasting a ton of HD space: most generated output files are plain text, and plain text compresses extremely well.
This is where tar and gzip (.tar.gz) can help you out!
I’m not going to go into how tar and gzip work here, other than to say: tar bundles many files (and directories) into a single archive file, and gzip compresses a single file.
Knowing that, you can see how they work together: you tar up your archive directory and then gzip the resulting file to get a compressed version.
To pull this off, I created my own FileUtils class that extends org.apache.commons.io.FileUtils, and I use that class whenever I want what FileUtils gives me.
```java
/**
 * Compress (tar.gz) the input file (or directory) to the output file
 * <p/>
 * In the case of a directory all files within the directory (and all nested
 * directories) will be added to the archive
 *
 * @param file   The file (or directory) to compress
 * @param output The resulting output file (should end in .tar.gz)
 * @throws IOException
 */
public static void compressFile(File file, File output) throws IOException {
    ArrayList<File> list = new ArrayList<File>(1);
    list.add(file);
    compressFiles(list, output);
}

/**
 * Compress (tar.gz) the input files to the output file
 *
 * @param files  The files to compress
 * @param output The resulting output file (should end in .tar.gz)
 * @throws IOException
 */
public static void compressFiles(Collection<File> files, File output) throws IOException {
    LOG.debug("Compressing " + files.size() + " files to " + output.getAbsoluteFile());

    // Create the output stream for the output file
    FileOutputStream fos = new FileOutputStream(output);
    // Wrap the output file stream in streams that will tar and gzip everything
    TarArchiveOutputStream taos = new TarArchiveOutputStream(
            new GZIPOutputStream(new BufferedOutputStream(fos)));

    // TAR has an 8 gig file limit by default; this gets around that
    taos.setBigNumberMode(TarArchiveOutputStream.BIGNUMBER_STAR);
    // TAR originally didn't support long file names, so enable the support for it
    taos.setLongFileMode(TarArchiveOutputStream.LONGFILE_GNU);

    // Get to putting all the files in the compressed output file
    for (File f : files) {
        addFilesToCompression(taos, f, ".");
    }

    // Close everything up
    taos.close();
    fos.close();
}

/**
 * Does the work of compression and recursing for nested directories
 * <p/>
 * Borrowed heavily from http://www.thoughtspark.org/node/53
 *
 * @param taos The archive
 * @param file The file to add to the archive
 * @param dir  The directory that should serve as the parent directory in the archive
 * @throws IOException
 */
private static void addFilesToCompression(TarArchiveOutputStream taos, File file, String dir)
        throws IOException {
    // Create an entry for the file
    taos.putArchiveEntry(new TarArchiveEntry(file, dir + FILE_SEPARATOR + file.getName()));
    if (file.isFile()) {
        // Add the file to the archive
        BufferedInputStream bis = new BufferedInputStream(new FileInputStream(file));
        IOUtils.copy(bis, taos);
        taos.closeArchiveEntry();
        bis.close();
    } else if (file.isDirectory()) {
        // Close the archive entry (directories have no content of their own)
        taos.closeArchiveEntry();
        // Recurse, passing the full parent path so nested directories keep
        // their structure inside the archive
        for (File childFile : file.listFiles()) {
            addFilesToCompression(taos, childFile, dir + FILE_SEPARATOR + file.getName());
        }
    }
}
```
As you can see, it’s pretty easy to create a compressed version of your files. Trust me when I say this is worth it: one project created 13 gigs of archive files daily; once compressed, that was less than 2 gigs. That is a savings of 4,015 gigs a year. That’s huge for such a small change.