# Call .done() at the end of your promise chains

tl;dr: If you're using the q promise library, you need to either return your promise from a function or call .done() on it.

The q promise library is widely used in Node applications, especially those with AngularJS on the front end. That's because AngularJS ships with $q, a promise implementation modeled on q.

Here's a mistake using q that I've made myself, and one I've noticed many others make. The q documentation is pretty clear, but it's also long, and it buries some important details alongside information that's irrelevant for most developers.

Consider the following use of promises:

```javascript
/** doSomethingAsynchronousWithProcessedFile.js */

function getAndProcessFile(){
    //Note: pass the function itself to then(), not the result of calling it.
    return getFile().then(processFile);
}

function doSomethingAsynchronousWithProcessedFile(action, processedFile){
    throw new Error("Action " + action + " not supported.");
}

var action = "save";

getAndProcessFile().then(function(processedFile){
    return doSomethingAsynchronousWithProcessedFile(action, processedFile);
});
```


Assume that no errors occur in the code until doSomethingAsynchronousWithProcessedFile().

So what happens if we run this code? Answer: nothing, at least if we're using q. The file will be acquired and processed, but nothing else will happen. The error thrown in the doSomething function won't show up anywhere. It will be swallowed.
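Here's a minimal sketch of the same failure mode using native Promises, which swallow unhandled rejections in much the same way; getFile and processFile below are stand-ins, not real APIs:

```javascript
// Stand-ins for the file functions from the example above.
function getFile() { return Promise.resolve("raw file"); }
function processFile(file) { return "processed " + file; }

var chain = getFile()
    .then(processFile)
    .then(function (processedFile) {
        // This error rejects the chain...
        throw new Error("Action save not supported.");
    });
// ...but with no catch() and no done(), nothing in our code ever sees it.
```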

## Stop swallowing my errors

How do we stop errors from being swallowed? If one has any experience with q at all, one might propose this refactor:

```javascript
getAndProcessFile().then(function(processedFile){
    return doSomethingAsynchronousWithProcessedFile(action, processedFile);
}).catch(function(err){     //fail if we're using AngularJS
    //Log the error
    console.log(err);
});
```


Does it work? Sort of. At least now we get an indication of the error. You might think we could also add a catch block to getAndProcessFile() if we need to handle errors that occur while getting and processing the file differently.

catch() ensures that errors from earlier in the promise chain won't be swallowed. But it doesn't cover every error in the chain. In particular, it won't help in two cases:

1. If an error is thrown in catch() itself.
2. If an error occurs further down the chain.

How does 1 occur? Here's an example, pretty close to one I've seen in the wild:

```javascript
getAndProcessFile().then(function(processedFile){
    return doSomethingAsynchronousWithProcessedFile(action, processedFile);
}).catch(function(err){
    throw err;      //this error is going NOWHERE
});
```


This code effectively mutes the catch block, negating its usefulness. That's not to say you never want to rethrow an error inside a catch block. Maybe you do. Maybe the type of error matters. Maybe some errors just need to be logged, but others need to be rethrown. Regardless, you can't JUST rethrow the error. You need to end the promise chain.
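The same muting behavior is easy to reproduce with native Promises; this sketch shows that a rethrow inside catch() just produces a fresh rejected promise that nothing handles:

```javascript
// A rethrow inside catch() does not re-raise the error in any
// ordinary sense; it simply rejects the next promise in the chain.
var chain = Promise.reject(new Error("original failure"))
    .catch(function (err) {
        throw err; // goes nowhere unless something terminal handles it
    });
// chain is a rejected promise with no handlers: the error is swallowed.
```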

How does 2 occur? Here's another example:

```javascript
getAndProcessFile().then(function(processedFile){
    //Suppose this call returns the processedFile after performing the action on it.
    return doSomethingAsynchronousWithProcessedFile(action, processedFile);
}).catch(function(err){
    console.log(err);
}).then(function(processedFile){
    return doSomethingElseAsynchronousWithProcessedFile(processedFile);
});

function doSomethingElseAsynchronousWithProcessedFile(processedFile){
    var promises = [];
    //Do a whole bunch of different asynchronous things with the processedFile,
    //storing a promise for each in a promises array.
    promises.push(someAsynchronousThing(processedFile));
    promises.push(anotherAsynchronousThing(processedFile));
    promises.push(yetAnother(processedFile));
    //Etc.

    //Return one promise that is fulfilled when all the component promises are fulfilled.
    return Q.all(promises);
}
```


The function doSomethingElseAsynchronousWithProcessedFile() might push literally dozens of other promises onto its array. Any one of those promises might itself be the result of a long promise chain, and some of those chains might not use a catch block to catch errors.

What happens if that's the case? Those errors will get swallowed.
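The shape of the problem can be sketched with native Promises (Promise.all standing in for Q.all): a catch() placed earlier in the chain never sees a rejection produced by a later step.

```javascript
var caughtEarly = false;

var chain = Promise.resolve("processed file")
    .catch(function (err) {
        caughtEarly = true; // only runs for errors from EARLIER steps
    })
    .then(function (processedFile) {
        return Promise.all([
            Promise.resolve("thing one"),
            Promise.reject(new Error("one of dozens of promises failed"))
        ]);
    });
// caughtEarly stays false, and the rejection from Promise.all is
// unhandled: swallowed.
```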

## Okay, really. Stop swallowing my errors

Even if we could guarantee a catch() after every then(), all the way down every chain, that wouldn't be enough to stop errors from being swallowed, because those catch blocks might themselves throw errors, whether accidentally or on purpose.

What if we include a catch block at the very end of the top level promise chain? Better, maybe, but presumably we still want to recognize the possibility of an error getting thrown in that block.

Solution: a promise must be the return value of some function, or you must be .done() with it.

```javascript
getAndProcessFile().then(function(processedFile){
    return doSomethingAsynchronousWithProcessedFile(action, processedFile);
}).then(function(processedFile){
    return doSomethingElseAsynchronousWithProcessedFile(processedFile);
}).catch(function(err){
    if(err instanceof SeriousError) {
        throw err;
    } else {
        console.log(err.message || err);
    }
}).done();
```


If the error is serious, we rethrow it. If it isn't, we log it. But to ensure that the serious errors bring the program to a halt, we have to use done() to make sure they bubble all the way up.

Without done() here, we would have the worst of all possible worlds: our non-serious errors would get logged, and our serious errors would be swallowed.
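For intuition, here's a simplified sketch of the core idea behind done(), written against native Promises. q's real implementation does more than this, and the onUncaught parameter is purely an illustration hook for this sketch; q's done() takes no such argument.

```javascript
// Core idea of done(): if the chain ends in a rejection, rethrow the
// error OUTSIDE the promise machinery so it crashes loudly instead of
// being swallowed.
function done(promise, onUncaught) {
    promise.then(null, function (err) {
        if (onUncaught) return onUncaught(err); // illustration-only hook
        // Rethrowing on the next tick escapes the promise chain entirely,
        // surfacing the error as an ordinary uncaught exception.
        setTimeout(function () { throw err; }, 0);
    });
}
```

Calling done(somePromise) at the end of a chain thus turns any leftover rejection into an ordinary uncaught exception.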

## What about finally()?

You can use finally() in your chain to execute code that needs to happen whether or not an error occurred in the chain. But what if an error occurs in the finally() block, whether accidentally or on purpose?

In short, you still need to use done().
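A quick sketch with native Promises (whose finally() behaves like q's in this respect, assuming Node 10+): an error thrown inside finally() becomes the chain's new outcome, so the chain still needs a terminal handler.

```javascript
// An error thrown inside finally() replaces the chain's outcome with a
// rejection, which will be swallowed unless something terminal
// (done() in q) handles it.
var chain = Promise.resolve("ok")
    .finally(function () {
        throw new Error("cleanup failed");
    });
// chain is now rejected with "cleanup failed", even though the
// original work succeeded.
```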

# Testing complex services in both Angular and Node

Controllers are supposed to be small. The complex business logic goes in services, factories, or providers. So goes conventional Angular wisdom. According to Robert Martin's Clean Code, functions are supposed to be really small. In JavaScript, there is some tension between these two aims. Add in a requirement that services be fully testable, and you've got a real dilemma.

Here's the problem. A service should interact with controllers through a thin API. From a controller's perspective, a service should be more or less a black box. Within that black box, the service should do what it needs to do using a myriad of different, typically small functions. By design, those small functions will not be exposed to the controller or to any other component of the application.

A typical design that fits this description (not Angular specific) might look like this:

```javascript
function someService(directory){
    this.directory = directory;
    this.files = this.getAllFilesFromDirectory();
    this.modules = this.convertAllFilesToModules();
}

someService.prototype.getAllFilesFromDirectory = function(){
    var files = someUtilityFunctionThatGetsFilesFromADirectory(this.directory);
    return files;
};

someService.prototype.convertAllFilesToModules = function(){
    var modules = [];
    this.files.forEach(function(file){
        modules.push(someUtilityFunctionThatTakesAFileAndReturnsAModule(file));
    });
    return modules;
};
```


The first question to ask is where the utility functions should go. And, obviously, there could be a whole bunch of them, not just two as in this example. The utility functions do most of the real work, and if we're being clean about it, each of them should be small and only do a little bit of work. Linking the functions together -- running them in a certain sequence, for example -- will yield the output that the overarching API should make available to consumers of the service.

But, I'd argue, for just that reason, the utility functions shouldn't be part of the API -- that is, they shouldn't go on someService's prototype. The only functions that should go on the prototype are high level functions that the service consumer will access (e.g. an Angular controller.)

So if we don't put them on the prototype, where do we define them? We could simply define them in the scope of the same file as the service, but outside the scope of the service itself. At the top or the bottom of the file, we could have a large list of functions:

```javascript
function someUtilityFunctionThatGetsFilesFromADirectory(dir){
    //...
}

function someOtherUtilityFunction(){
    //...
}

function anotherUtilityFunction(){
    //...
}
```


But this is extremely problematic as well. We're going to want to unit test each of those functions, and we won't be able to unless we somehow make them accessible. The testing framework -- Jasmine, for example -- needs to reach the functions in some way, but it can't if they're stuck behind the same API as the service's regular consumers.

What's the solution? In Node, I used a closure like this one:

"use strict";

var FromDirectoryProvider = (function(){

//This allows us to keep the utility functions in a separate module, where they are testable, but still use them in the closure.
var provider = require("./from-directory-utils");
// getModulesFromDir is a utility function that is simply exported from from-directory.utils.js

provider.fn = function(directory){
var allModules, useFunctionNames;
if(typeof directory == "string"){
allModules = provider.getModulesFromDir(directory, true);
useFunctionNames = true;
} else if(typeof directory == "object"){
if(!directory.dir) throw new Error("Property 'dir' is required.");
directory.recursive = directory.recursive || true;
directory.useFunctionNames = directory.useFunctionNames || true;
allModules = provider.getModulesFromDir(directory.dir, directory.recursive);
}

provider.processModules(allModules, useFunctionNames);
}

return provider.fn;
})();

module.exports = FromDirectoryProvider;


The benefit of this approach is that the utility functions stay easily accessible within the closure, but are not accessible outside it. During testing, though, it is easy to load from-directory-utils and test every one of the functions on it.
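Here is a minimal, self-contained sketch of the same pattern; the names are illustrative, and in a real project the utils object would live in its own file and be loaded with require():

```javascript
// Utilities live on a plain object. In a real project this object would
// be its own module (e.g. from-directory-utils.js), so tests can
// require() it and exercise each function directly.
var utils = {
    getModulesFromDir: function (dir, recursive) {
        //Real implementation would read the directory; stubbed here.
        return [];
    }
};

// The public provider closes over the utils object; consumers of the
// provider never see the individual utility functions.
var FromDirectoryProvider = (function () {
    return function (directory) {
        return utils.getModulesFromDir(directory, true);
    };
})();
```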